8 changes: 8 additions & 0 deletions plugins/AHavenVLMConnector/CHANGELOG.md
@@ -0,0 +1,8 @@
# Changelog

All notable changes to the A Haven VLM Connector project will be documented in this file.

## [1.0.0] - 2025-06-29

### Added
- **Initial release**
143 changes: 143 additions & 0 deletions plugins/AHavenVLMConnector/README.md
@@ -0,0 +1,143 @@
# A Haven VLM Connector

A StashApp plugin for Vision-Language Model (VLM) based content tagging and analysis. This plugin is designed with a **local-first philosophy**, empowering users to run analysis on their own hardware (using CPU or GPU) and their local network. It also supports cloud-based VLM endpoints for additional flexibility. The Haven VLM Engine provides advanced automatic content detection and tagging, delivering superior accuracy compared to traditional image classification methods.

## Features

- **Local Network Empowerment**: Distribute processing across home/office computers without cloud dependencies
- **Context-Aware Detection**: Leverages Vision-Language Models' understanding of visual relationships
- **Advanced Dependency Management**: Uses PythonDepManager for automatic dependency installation
- **Enjoying Funscript Haven?** Check out more tools and projects at https://github.com/Haven-hvn

## Requirements

- Python 3.8+
- StashApp
- PythonDepManager plugin (automatically handles dependencies)
- OpenAI-compatible VLM endpoints (local or cloud-based)

## Installation

1. Clone or download this plugin to your StashApp plugins directory
2. Ensure PythonDepManager is installed in your StashApp plugins
3. Configure your VLM endpoints in `haven_vlm_config.py` (local network endpoints recommended)
4. Restart StashApp

The plugin automatically manages all dependencies.

## Why Local-First?

- **Complete Control**: Process sensitive content on your own hardware
- **Cost Effective**: Avoid cloud processing fees by using existing resources
- **Flexible Scaling**: Add more computers to your local network for increased capacity
- **Privacy Focused**: Keep your media completely private
- **Hybrid Options**: Combine local and cloud endpoints for optimal flexibility

```mermaid
graph LR
A[User's Computer] --> B[Local GPU Machine]
A --> C[Local CPU Machine 1]
A --> D[Local CPU Machine 2]
A --> E[Cloud Endpoint]
```
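
As a rough illustration of that topology, a multi-endpoint entry in `haven_vlm_config.py` might look like the sketch below; the endpoint-list variable name, IP addresses, and cloud URL are placeholders, and only the per-endpoint keys shown in the LM Studio example later in this README are taken from the plugin itself.

```python
# Hypothetical sketch only: variable name, addresses, and cloud URL are
# placeholders; check the shipped haven_vlm_config.py for the real schema.
VLM_ENDPOINTS = [
    {
        "base_url": "http://192.168.1.20:1234/v1",  # local GPU machine
        "api_key": "",
        "name": "gpu-box",
        "weight": 5,            # higher weight -> larger share of requests
        "is_fallback": False,
    },
    {
        "base_url": "http://192.168.1.21:1234/v1",  # local CPU machine
        "api_key": "",
        "name": "cpu-box-1",
        "weight": 2,
        "is_fallback": False,
    },
    {
        "base_url": "https://vlm.example.com/v1",   # optional cloud endpoint
        "api_key": "sk-...",
        "name": "cloud-fallback",
        "weight": 1,
        "is_fallback": True,    # only used when local endpoints are unavailable
    },
]
```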

## Configuration

### Easy Setup with LM Studio

[LM Studio](https://lmstudio.ai/) provides the easiest way to configure local endpoints:

1. Download and install [LM Studio](https://lmstudio.ai/)
2. [Search for or download](https://huggingface.co/models) a vision-capable model. Tested models, from highest to lowest accuracy: zai-org/glm-4.6v-flash, huihui-mistral-small-3.2-24b-instruct-2506-abliterated-v2, qwen/qwen3-vl-8b, lfm2.5-vl
3. Load your desired model
4. On the Developer tab, start the local server using the start toggle
5. Optionally, click the Settings gear and toggle *Serve on local network*
6. Optionally, configure `haven_vlm_config.py`:

By default, localhost is included in the config. **Remove the cloud endpoint if you don't want automatic failover.**
```python
{
"base_url": "http://localhost:1234/v1", # LM Studio default
"api_key": "", # API key not required
"name": "lm-studio-local",
"weight": 5,
"is_fallback": False
}
```

### Tag Configuration

```python
"tag_list": [
"Basketball point", "Foul", "Break-away", "Turnover"
]
```

### Processing Settings

```python
VIDEO_FRAME_INTERVAL = 2.0 # Process every 2 seconds
CONCURRENT_TASK_LIMIT = 8 # Adjust based on local hardware
```
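
As a rough sanity check of how these two settings interact (illustrative numbers only, not part of the plugin):

```python
# Back-of-the-envelope estimate; the latency figure is an assumption.
video_length_s = 600         # a 10-minute video
frame_interval_s = 2.0       # VIDEO_FRAME_INTERVAL
concurrent_tasks = 8         # CONCURRENT_TASK_LIMIT
per_frame_latency_s = 1.5    # assumed average VLM response time

frames = video_length_s / frame_interval_s                     # 300 frames
wall_time_s = frames * per_frame_latency_s / concurrent_tasks  # ~56 seconds
print(f"{frames:.0f} frames, roughly {wall_time_s:.0f} s of VLM time")
```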

## Usage

### Tag Videos
1. Tag scenes with the `VLM_TagMe` tag
2. Run the "Tag Videos" task
3. The plugin processes the content using local/network resources

### Performance Tips
- Start with 2-3 local machines for load balancing
- Assign higher weights to GPU-enabled machines
- Adjust `CONCURRENT_TASK_LIMIT` based on total system resources
- Use SSD storage for better I/O performance

## File Structure

```
AHavenVLMConnector/
├── ahavenvlmconnector.yml
├── haven_vlm_connector.py
├── haven_vlm_config.py
├── haven_vlm_engine.py
├── haven_media_handler.py
├── haven_vlm_utility.py
├── requirements.txt
└── README.md
```

## Troubleshooting

### Local Network Setup
- Ensure firewalls allow communication between machines
- Verify all local endpoints are running VLM services
- Use static IPs for local machines
- Check that `http://local-machine-ip:port/v1` responds correctly (see the sketch below)
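
A minimal reachability check, assuming each server exposes the common OpenAI-compatible `/v1/models` route (not every server does, so treat this as a sketch):

```python
# Sketch: confirm each configured endpoint answers HTTP requests.
# Assumes the server exposes the common OpenAI-compatible /v1/models route.
import requests

endpoints = [
    "http://192.168.1.20:1234/v1",  # placeholder local machines
    "http://192.168.1.21:1234/v1",
]

for base_url in endpoints:
    try:
        resp = requests.get(f"{base_url}/models", timeout=5)
        print(base_url, "->", resp.status_code)
    except requests.RequestException as exc:
        print(base_url, "-> unreachable:", exc)
```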

### Performance Optimization
- **Distribute Load**: Use multiple mid-range machines instead of a single high-end machine
- **GPU Prioritization**: Assign highest weight to GPU machines
- **Network Speed**: Use wired Ethernet connections for faster transfer
- **Resource Monitoring**: Watch system resources during processing

## Development

### Adding Local Endpoints
1. Install VLM service on network machines
2. Add endpoint configuration with local IPs
3. Set appropriate weights based on hardware capability (one possible weighting scheme is sketched below)
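
How the engine actually consumes these weights is internal to `haven_vlm_engine.py`; the sketch below shows one plausible weighted-random scheme purely as an illustration, not the plugin's real selection logic.

```python
# Illustrative only: one way endpoint weights could drive request distribution.
import random

endpoints = [
    {"name": "gpu-box", "weight": 5},
    {"name": "cpu-box-1", "weight": 2},
    {"name": "cpu-box-2", "weight": 1},
]

def pick_endpoint(pool):
    """Pick an endpoint with probability proportional to its weight."""
    weights = [e["weight"] for e in pool]
    return random.choices(pool, weights=weights, k=1)[0]

print(pick_endpoint(endpoints)["name"])
```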

### Custom Models
Use any OpenAI-compatible model that supports the following (see the request sketch after this list):
- POST requests to `/v1/chat/completions`
- Vision capabilities with image input
- Local deployment options
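
A minimal vision request in the standard OpenAI chat-completions format, assuming a local endpoint and a placeholder model name; the plugin's own request code may differ:

```python
# Sketch of an OpenAI-compatible vision request; model name, endpoint, and
# image path are placeholders.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

with open("frame.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="local-vision-model",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Which of these tags apply: Basketball point, Foul, Break-away, Turnover?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```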

### Log Messages

Check StashApp logs for detailed processing information and error messages.

## License

This project is part of the StashApp Community Scripts collection.
22 changes: 22 additions & 0 deletions plugins/AHavenVLMConnector/ahavenvlmconnector.yml
@@ -0,0 +1,22 @@
name: A Haven VLM Connector
# requires: PythonDepManager
description: Tag videos with Vision-Language Models using any OpenAI-compatible VLM endpoint
version: 1.0.0
url: https://github.com/stashapp/CommunityScripts/tree/main/plugins/AHavenVLMConnector
exec:
- python
- "{pluginDir}/haven_vlm_connector.py"
interface: raw
tasks:
  - name: Tag Videos
    description: Run VLM analysis on videos with the VLM_TagMe tag
    defaultArgs:
      mode: tag_videos
  - name: Collect Incorrect Markers and Images
    description: Collect data from markers and images that were VLM-tagged but manually marked with VLM_Incorrect because the VLM made a mistake, and write it to a file that can be used to improve the VLM models.
    defaultArgs:
      mode: collect_incorrect_markers
  - name: Find Marker Settings
    description: Find optimal marker settings based on a video that has manually tuned markers and has previously been processed by the VLM. Only one video should have the VLM_TagMe tag before running.
    defaultArgs:
      mode: find_marker_settings
98 changes: 98 additions & 0 deletions plugins/AHavenVLMConnector/exit_tracker.py
@@ -0,0 +1,98 @@
"""
Comprehensive sys.exit tracking module
Instruments all sys.exit() calls with full call stack and context
"""

import sys
import traceback
from typing import Optional

# Store original sys.exit
original_exit = sys.exit

# Track if we've already patched
_exit_tracker_patched = False

def install_exit_tracker(logger=None) -> None:
"""
Install the exit tracker by monkey-patching sys.exit

Args:
logger: Optional logger instance (will use fallback print if None)
"""
global _exit_tracker_patched, original_exit

if _exit_tracker_patched:
return

# Store original if not already stored
if hasattr(sys, 'exit') and sys.exit is not original_exit:
original_exit = sys.exit

    def tracked_exit(code: Optional[int] = None) -> None:
"""Track sys.exit() calls with full call stack"""
# Get current stack trace (not from exception, but current call stack)
stack = traceback.extract_stack()

# Format the stack trace, excluding this tracking function
stack_lines = []
        for frame in stack:
            # Skip interpreter/library frames and this tracker's own frames
            if (frame.name != 'tracked_exit' and
                    '/usr/lib' not in frame.filename and
                    '/System/Library' not in frame.filename and
                    'exit_tracker.py' not in frame.filename):
stack_lines.append(
f" File \"{frame.filename}\", line {frame.lineno}, in {frame.name}\n {frame.line}"
)

# Take last 15 frames to see the full call chain
stack_str = '\n'.join(stack_lines[-15:])

# Get current exception info if available
exc_info = sys.exc_info()
exc_str = ""
if exc_info[0] is not None:
exc_str = f"\n Active Exception: {exc_info[0].__name__}: {exc_info[1]}"

# Build the error message
error_msg = f"""[DEBUG_EXIT_CODE] ==========================================
[DEBUG_EXIT_CODE] sys.exit() called with code: {code}
[DEBUG_EXIT_CODE] Call stack (last 15 frames):
{stack_str}
{exc_str}
[DEBUG_EXIT_CODE] =========================================="""

# Log using provided logger or fallback to print
if logger:
try:
logger.error(error_msg)
except Exception as log_error:
print(f"[EXIT_TRACKER_LOGGER_ERROR] Failed to log: {log_error}")
print(error_msg)
else:
print(error_msg)

# Call original exit
original_exit(code)

# Install the tracker
sys.exit = tracked_exit
_exit_tracker_patched = True

if logger:
logger.debug("[DEBUG_EXIT_CODE] Exit tracker installed successfully")
else:
print("[DEBUG_EXIT_CODE] Exit tracker installed successfully")

def uninstall_exit_tracker() -> None:
"""Uninstall the exit tracker and restore original sys.exit"""
global _exit_tracker_patched, original_exit

if _exit_tracker_patched:
sys.exit = original_exit
_exit_tracker_patched = False

# Auto-install on import (can be disabled by calling uninstall_exit_tracker())
if not _exit_tracker_patched:
install_exit_tracker()
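

# Usage sketch (illustrative only): the tracker auto-installs on import, but
# install_exit_tracker() may also be called explicitly with a StashApp logger.
if __name__ == "__main__":
    install_exit_tracker()    # no-op if the import above already installed it
    try:
        sys.exit(3)           # logged with its call stack, then raises SystemExit
    except SystemExit:
        pass                  # swallowed here only to demonstrate uninstall
    uninstall_exit_tracker()  # restore the original sys.exit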