[Promotional graphic: glossy 3D text 'REAL 3D' over a sunset cityscape — 'COMFYUI v1.3.5: Camera Intrinsics + Depth Anything V3']
Dec 08 2025 · 3D Modeling and Design, Advanced Digital Art, AI, AI-Generated Art, machine-learning, Tools

ComfyUI Depth Estimation Node v1.3.3 Released

I’m excited to announce the release of **ComfyUI Depth Estimation Node v1.3.3**! This update brings key improvements to 3D integration workflows and enhances stability for your daily generations.

🌟 What’s New?

1. Camera Intrinsics Export
The standout feature in this release is the new support for exporting **Camera Intrinsics**. This is a game-changer for anyone taking their depth maps into 3D software like Blender or Houdini. You can now get more accurate 3D reconstruction by matching the camera parameters directly from the estimation node.
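To illustrate why intrinsics matter for reconstruction (this is a generic pinhole-model sketch, not the node's actual output format — `fx`, `fy`, `cx`, `cy` are the usual focal-length and principal-point parameters, assumed here for illustration), matched intrinsics let you back-project a depth map into a 3D point cloud:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W) into an (H*W, 3)
    point cloud using pinhole camera intrinsics."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along width, v along height.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat 2x2 depth map with the principal point at the centre.
pts = backproject_depth(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

With the wrong intrinsics, the same depth values fan out at the wrong angles, which is exactly the distortion you see when importing unmatched depth into Blender or Houdini.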

2. Improved Stability
We’ve squashed several bugs that were causing intermittent issues during batch processing. The node is now more robust, ensuring your long render queues finish without hiccups.

3. Performance Tweaks
Minor optimizations have been made to the depth calculation process, shaving off valuable milliseconds from your generation time.


🔧 How to Update

The easiest way to get v1.3.3 is via the **ComfyUI Manager**:
1. Open ComfyUI.
2. Click on “Manager”.
3. Select “Fetch Updates”.
4. If the node is already installed, click “Update All”, or find “ComfyUI Depth Estimation” in the list and update it individually.
5. Restart ComfyUI to apply the changes.

Alternatively, you can do a `git pull` in your `custom_nodes/ComfyUI-Depth-Estimation` directory.

🔗 Links
* GitHub Repository: Link to your GitHub Repo
* ComfyUI Manager: Link to Comfy Registry

[Promotional graphic: 3D neon text 'Multi-Vision Toolkit v2' and 'Qwen3-VL & Flash Attention' with a cybernetic eye illustration]
Dec 02 2025 · AI, machine-learning, Tools, VLM

Vision-Toolkit Demo


Advanced Computer Vision & Dataset Curation

🔍 Primary Functionality

Automated image captioning and dataset curation using state-of-the-art Vision Language Models (VLMs). Streamline your LoRA training data preparation with local, privacy-focused AI.

⚡ Key Capabilities

  • Multi-Model Support: Seamless switching between Florence-2 (Speed) and Qwen3-VL (Accuracy).
  • Non-Blocking UI: Asynchronous processing ensures the interface remains responsive during heavy inference.
  • Batch Processing: Rapidly tag and caption thousands of images with custom prefixes and suffixes.
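A minimal sketch of the batch-captioning pattern described above — one `.txt` sidecar caption per image, with configurable prefix and suffix, as LoRA trainers typically expect. The `caption_image` callable is a hypothetical stand-in for the toolkit's Florence-2/Qwen3-VL inference backend, not its actual API:

```python
from pathlib import Path

def caption_dataset(image_dir, caption_image, prefix="", suffix=""):
    """Write a sidecar .txt caption next to every image in image_dir.

    caption_image: any callable mapping an image path to a caption string
    (e.g. a wrapper around a local VLM).
    """
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    for img in sorted(Path(image_dir).iterdir()):
        if img.suffix.lower() not in exts:
            continue
        caption = f"{prefix}{caption_image(img)}{suffix}"
        # img1.png -> img1.txt, alongside the image.
        img.with_suffix(".txt").write_text(caption, encoding="utf-8")
```

For example, `caption_dataset("data", my_vlm, prefix="sks_style, ")` would prepend a trigger token to every generated caption.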

🚀 Main Benefits

  • Privacy First: All inference runs locally on your GPU—no data leaves your machine.
  • Efficiency: Reduce dataset preparation time by up to 80% compared to manual tagging.
  • Precision: High-fidelity captions specifically optimized for generative AI training.

Download Vision-Toolkit

Open Source • Local Execution • Python Based