Conversation

lstein (Owner) commented Jan 21, 2026

Implementation Complete ✅

This PR adds the ability to configure, on a per-model basis, whether a model runs on the CPU instead of the GPU.

Changes Made

Backend:

  • Added cpu_only field to MainModelDefaultSettings
  • Modified ModelCache.put() to accept optional execution_device parameter
  • Updated ModelLoader._load_and_cache() to pass device from model config
  • Added a helper method to determine the execution device from the model config (see the sketch below)
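
A minimal sketch of how these backend pieces might fit together is below. The names `MainModelDefaultSettings`, `ModelCache.put()`, and the execution-device helper come from the list above, but the field layout, signatures, and helper name are assumptions rather than the actual InvokeAI code.

```python
from typing import Optional

import torch
from pydantic import BaseModel


class MainModelDefaultSettings(BaseModel):
    # Existing default settings (vae, vae_precision, scheduler, steps, cfg_scale, ...)
    # are omitted; the new field is:
    cpu_only: Optional[bool] = None  # None/False -> use the normal device, True -> force CPU


def get_execution_device(model_config, default_device: torch.device) -> torch.device:
    """Hypothetical helper: return the CPU device when the model's default
    settings request CPU-only execution, otherwise the normal device."""
    settings = getattr(model_config, "default_settings", None)
    if settings is not None and getattr(settings, "cpu_only", False):
        return torch.device("cpu")
    return default_device


# In the loader, the resolved device would then be forwarded to the cache,
# roughly along the lines of:
#   device = get_execution_device(config, self._torch_device)
#   self._ram_cache.put(key, model, execution_device=device)
```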

Frontend:

  • New "CPU Only" toggle component in Model Manager Default Settings
  • Translation strings for UI labels
  • Form integration for saving the setting

How to Use

  1. Open Model Manager
  2. Select a main model
  3. Go to Default Settings tab
  4. Enable "CPU Only" toggle
  5. Save settings
  6. Model will now run on CPU instead of GPU
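
In mechanical terms, step 6 means the loader resolves this model's execution device to the CPU while other models keep the usual default. The snippet below only illustrates that distinction with plain torch devices; it is not an InvokeAI API call.

```python
import torch

# With "CPU Only" enabled, this model's weights and activations stay in system RAM:
cpu_only_device = torch.device("cpu")

# Other models keep the normal default, e.g. CUDA when a GPU is available:
default_device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
```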

NOTE:

After debugging, this should be merged into the remote branch.

Original prompt

This section details the original issue you should resolve

<issue_title>[enhancement]: Allow some models to run on CPU</issue_title>
<issue_description>### Is there an existing issue for this?

  • I have searched the existing issues

Contact Details

No response

What should this feature add?

By default most models run in VRAM, but VRAM is limited. I would like the option to run a model on the CPU. This would take the form of a slider button in the model manager's Default Settings tab which allows you to set, on a per-model basis, the option to "Run model on CPU". There could also be a default setting for this defined in invokeai/backend/model_manager/configs/base.py.

Alternatives

No response

Additional Content

No response</issue_description>

<agent_instructions>Please make this pull request relative to the main branch.</agent_instructions>

Comments on the Issue (you are @copilot in this section)

Copilot AI and others added 3 commits January 21, 2026 00:04

lstein (Owner, Author) commented Jan 21, 2026

@copilot I'm getting this frontend linting error:


```
> @invoke-ai/invoke-ai-ui@0.0.1 lint:tsc /home/runner/work/InvokeAI/InvokeAI/invokeai/frontend/web
> tsc --noEmit

Error: src/features/modelManagerV2/hooks/useMainModelDefaultSettings.ts(45,58): error TS2339: Property 'cpu_only' does not exist on type '{ vae?: string | null | undefined; vae_precision?: "fp16" | "fp32" | null | undefined; scheduler?: "ddim" | "ddpm" | "deis" | "deis_k" | "lms" | "lms_k" | "pndm" | "heun" | "heun_k" | ... 22 more ... | undefined; ... 5 more ...; guidance?: number | ... 1 more ... | undefined; }'.
Error: src/features/modelManagerV2/hooks/useMainModelDefaultSettings.ts(46,47): error TS2339: Property 'cpu_only' does not exist on type '{ vae?: string | null | undefined; vae_precision?: "fp16" | "fp32" | null | undefined; scheduler?: "ddim" | "ddpm" | "deis" | "deis_k" | "lms" | "lms_k" | "pndm" | "heun" | "heun_k" | ... 22 more ... | undefined; ... 5 more ...; guidance?: number | ... 1 more ... | undefined; }'.
Error: src/features/modelManagerV2/subpanels/ModelPanel/MainModelDefaultSettings/DefaultCpuOnly.tsx(41,31): error TS2322: Type '"cpuOnly"' is not assignable to type 'Feature'.
 ELIFECYCLE  Command failed with exit code 2.
Error: Process completed with exit code 2.
```

Copilot AI commented Jan 21, 2026

@lstein I've opened a new pull request, #47, to work on those changes. Once the pull request is ready, I'll request review from you.

lstein and others added 2 commits January 21, 2026 00:22
…ement (#47)

* Initial plan

* Fix TypeScript linting errors for cpu_only field

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

* chore(frontend) eslint

* chore(frontend): prettier

* Add missing popover translation for cpuOnly feature

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

* Improve cpuOnly popover help text based on code review

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

* Simplify CPU-only UI and add encoder support with device mismatch fix

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

* Limit CPU-only execution to text encoders and ensure conditioning is moved to CPU for storage

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

* Fix CPU-only execution to properly check model-specific compute device

Co-authored-by: lstein <111189+lstein@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
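
The last two commits above describe two behaviours: conditioning produced by a CPU-only text encoder is moved to the CPU before it is stored, and the code checks the model-specific compute device rather than a single global one. Below is a rough sketch of the storage side using plain torch tensors; the function name is illustrative, not the actual InvokeAI helper.

```python
import torch


def detach_to_cpu_for_storage(conditioning: torch.Tensor) -> torch.Tensor:
    """Detach a conditioning tensor and move it to the CPU so that cached
    encoder outputs do not pin memory on whichever device the encoder ran on."""
    return conditioning.detach().to(torch.device("cpu"))


# e.g.
#   embeddings = text_encoder(tokens)                # may run on CPU or GPU per model
#   stored = detach_to_cpu_for_storage(embeddings)   # always stored on the CPU
```
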
lstein (Owner, Author) commented Jan 21, 2026

@copilot Can you make the frontend and backend changes needed to allow the user to change the cpu-only status of all the standalone encoder models? The best place to put the UI is in the model details pane for the various encoder models, using a layout similar to that used for Main models.

Copilot AI commented Jan 21, 2026

@lstein I've opened a new pull request, #49, to work on those changes. Once the pull request is ready, I'll request review from you.
