Nov 17, 2025

Introduction
The landscape of drone development continues to evolve rapidly, and with it comes the need for more efficient development workflows. QGroundControl (QGC), the leading ground control station software for drones, relies heavily on QML (Qt Modeling Language) for its user interface. However, developers often struggle to find adequate resources and examples for QML development, particularly when creating custom interfaces for mission planning, telemetry visualization, and real-time UAV control. This challenge has created a significant barrier to rapid prototyping and feature development.
Qt has addressed this gap by releasing specialized code generation models: CodeLlama-QML 7B and 13B. These models represent a significant leap forward for QGC developers, offering AI-assisted coding capabilities specifically tuned for QML development. While not perfect, they can dramatically accelerate the development process by generating boilerplate code, suggesting completions, and helping developers learn QML patterns more quickly.
This tutorial explores how to leverage Qt’s CodeLlama-QML models to enhance your QGC development workflow.
Our Roadmap

Before we dive into installation, let's talk about what we're actually building here. You're setting up a complete AI coding assistant that runs entirely on your machine: no cloud services, no data leaving your computer, no monthly bills. CodeLlama-QML is Qt's specialized language model trained specifically on QML code, so it understands the syntax, patterns, and idioms of Qt development.
Combined with Ollama as your local model server and Qt Creator's AI Assistant plugin, you get intelligent code suggestions while maintaining complete privacy and control. Here's the roadmap:
Install Qt Creator
Install Ollama
Download CodeLlama-QML model
Install AI Assistant plugin
Configure Qt Creator
Clone the QGroundControl source code
Generate QML code with AI assistance
Large Language Models and Local Code Generation
Though it's a less common practice, LLMs can also be run locally, giving you complete control over your development environment without sending your code to external servers. Here are some things to consider when deciding whether to run a model locally.
Advantages:
Complete privacy—your code never leaves your machine
Offline functionality once the model is downloaded
No recurring API costs
Tradeoff:
Requires adequate local computing resources (capable GPU for reasonable response times)
Code Llama is Meta's freely available model specialized for code generation, built on top of the general-purpose Llama foundation model. Qt fine-tuned Code Llama on QML code samples to create CodeLlama-QML, a model optimized for QML's declarative syntax, property bindings, and component hierarchies.
Ollama: The Local Model Server
Ollama acts as the crucial middleware in this setup. Think of it as a local server that manages and runs AI models on your machine. Instead of calling out to OpenAI’s servers or another cloud provider, Qt Creator communicates with Ollama running on your localhost. Ollama handles the computational overhead of running the model, manages memory efficiently, and provides a REST API that development tools can communicate with.
Setting Up Your Development Environment
Getting CodeLlama-QML operational requires careful setup of several components. This process can be challenging initially, but following these steps methodically will establish a reliable foundation for AI-assisted QML development.
Installing Qt Creator

Begin by downloading Qt Creator from qt.io. You’ll need a commercial license to access the AI assistant plugin, which may present a barrier for hobbyist developers. However, if you’re doing commercial QGC development, you likely already have this license. Qt offers a 10-day trial if you want to evaluate the functionality before committing to a license purchase.
During installation, ensure you select the appropriate version. In our tests at the time of writing, version 6.9.3 provided the most stable experience with the AI assistant.
Installing Ollama

Navigate to Ollama.com and download the Windows version (or the appropriate version for your operating system). The installation process is straightforward—Ollama installs as a background service that automatically starts when your system boots.
Once installed, you need to download the CodeLlama-QML model.
There are two flavors of CodeLlama-QML available on Hugging Face for download:
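CodeLlama-QML 7B (about a 4GB download)
CodeLlama-QML 13B (about a 6GB download)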
You should download the largest model your machine can handle. Beware! Trying to squeeze a large model into a small box will bring response times to a crawl.
Open a command prompt and execute:
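Assuming Qt publishes the 7B model under a Hugging Face repository named QtGroupAI/CodeLlama-7B-QML (substitute the exact repository name shown on the model's Hugging Face page), Ollama can pull it directly using the hf.co/ prefix:

    ollama pull hf.co/QtGroupAI/CodeLlama-7B-QML

For the 13B version, swap in the 13B repository name. Run ollama list afterward to confirm the model is available and to see the exact name to use in later steps.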

This command downloads approximately 4GB of data for the 7-billion parameter model. The 13-billion parameter version requires about 6GB. The initial download may take several minutes, depending on your internet connection. Ollama stores these models locally, so you only download once.
Verifying Your Installation
Before integrating with Qt Creator, verify that Ollama is functioning correctly. You can test the model directly through a curl command:
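A minimal sketch of such a test, assuming the model name from the pull step above (adjust the model field to whatever ollama list reports):

    curl http://localhost:11434/api/generate -d '{
      "model": "hf.co/QtGroupAI/CodeLlama-7B-QML",
      "prompt": "// A red rectangle with rounded corners",
      "stream": false
    }'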

This should return JSON containing generated QML code. If you receive an error, check that:
Ollama is running (you should see it in your system tray or task manager)
The model downloaded successfully
Port 11434 isn’t blocked by your firewall
You’re using the correct model name
Configuring Qt Creator’s AI Assistant
With Ollama running and the model downloaded, you can configure Qt Creator to leverage this local AI capability. The AI assistant plugin provides the bridge between your IDE and the Ollama server.
Installing the AI Assistant Plugin

Open Qt Creator and navigate to Extensions. Make sure “Use external repository” is set to on and search for “AI assistant” in the available extensions. Click install and restart Qt Creator when prompted. This plugin enables Qt Creator to communicate with various AI backends, not just Ollama—you could also configure it to use ChatGPT, Claude, or other services if desired.
Configuring the Connection
Navigate to Edit > Preferences > AI Assistant. Under the Models section, select “CodeLlama 7B QML” from the dropdown. This tells Qt Creator which model to use when generating code. If you’ve installed the 13B version instead, select that option.

In the Advanced settings, verify the local model server URL is set to:
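    http://localhost:11434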
This URL must match the port Ollama is listening on. If you’ve configured Ollama to use a different port, adjust this setting accordingly.
Understanding the Integration
When you request code generation in Qt Creator, the following sequence occurs:
Qt Creator captures your cursor position and any comment you’ve written
It constructs a prompt including your comment and the surrounding code context
This prompt is sent via HTTP to your local Ollama server
Ollama feeds the prompt through the CodeLlama-QML model
The model generates QML code based on its training
Ollama returns this code via the REST API
Qt Creator displays the generated code as “ghost text” in your editor
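For reference, the non-streaming JSON that Ollama returns looks roughly like this, trimmed to the relevant fields, with the generated QML arriving in the response field:

    {
      "model": "hf.co/QtGroupAI/CodeLlama-7B-QML",
      "response": "Button { text: qsTr(\"OK\") }",
      "done": true
    }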
This entire process typically takes 0.5 to 1 second on a machine with a capable GPU. Without GPU acceleration, response times can stretch to several minutes, making the tool impractical for regular use.
Important Shortcuts
Ctrl + ' - Trigger code suggestions manually
Tab - Accept the entire suggestion
Esc - Dismiss the suggestion (or navigate away)
The plugin also supports inline chat commands that you can run on selected code:
/explain
/fix
/review
/qtest
/doc
/inlinecomments
Generating QML Code
With your environment configured, you can begin generating QML code. The process follows a simple pattern: write a descriptive comment, position your cursor, and trigger code generation.
Your First Generation
Let’s start with a basic example. Create a new project by selecting Application (Qt)\Qt Quick Application.
In the Kit Selection tab, make sure the Hide unsuitable kits, Desktop Qt 6.9.3, and Debug options are toggled on.
Invoke the inline editor by pressing Ctrl + Shift + A. Then add this comment:
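The exact wording is up to you; something along these lines works well:

    // Create a blue button with rounded corners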
Place your cursor on a new line immediately after the comment and press Ctrl + ' (Control plus apostrophe). After a brief pause, Qt Creator displays suggested code in gray “ghost text”:
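Suggestions vary from run to run; a representative completion might look like this:

    Button {
        text: "Click Me"
        width: 120
        height: 40
        background: Rectangle {
            radius: 8
            color: "blue"
        }
    }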

Press Tab to accept the suggestion, or Escape to dismiss it. If you press Tab, the code becomes permanent and you can continue editing.

This generated code demonstrates several things the model understands: it recognized you wanted a Button component, inferred reasonable dimensions, applied rounded corners via the radius property, and set the color to blue. The model even added a sensible default text property.
You can also highlight the code and use the /explain command, which will use the “Review” model set in the General tab earlier.
Working with Generated Code
The generated code serves as a starting point rather than a finished product. You’ll typically need to refine it to match your specific requirements, as sketched after this list. In the button example above, you might need to:
Adjust the dimensions to fit your layout
Change the text to something meaningful
Add signal handlers for user interaction
Modify styling to match your application’s theme
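A refined version of the earlier button might end up looking like this; the label, sizing, and colors here are illustrative placeholders, not values from a real project:

    Button {
        text: qsTr("Arm Vehicle")          // meaningful, translatable label (hypothetical)
        width: parent.width * 0.3          // sized relative to the surrounding layout
        height: 48
        background: Rectangle {
            radius: 8
            color: "steelblue"             // adjusted to an assumed application theme
        }
        onClicked: console.log("arm requested")  // placeholder signal handler
    }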
The model won’t generate perfect code every time; this is expected behavior with current AI technology. However, it significantly reduces the time spent on boilerplate code and helps you discover QML patterns you might not have known existed.
Import Statements
One common issue you’ll encounter is missing import statements. The model sometimes forgets to include necessary imports like QtQuick.Controls. When you see errors about undefined types, check that your file includes:
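    import QtQuick
    import QtQuick.Controls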
Adding these imports manually resolves most “unknown component” errors. As you become familiar with this pattern, checking and adding imports becomes second nature.
Practical QGC Development Examples
The true value of CodeLlama-QML emerges when tackling real QGroundControl development tasks. Let’s explore how the model handles actual development scenarios you might encounter. First let’s download QGroundControl:
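The repository lives at github.com/mavlink/qgroundcontrol; the stable branch name changes over releases, so substitute whichever Stable_* branch is current:

    git clone --recursive -b Stable_V4.4 https://github.com/mavlink/qgroundcontrol.git
    cd qgroundcontrol
    git submodule update --init --recursive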
This ensures you’re working with the stable branch and all necessary submodules are properly initialized.
Modifying Existing QGC Features
Consider a scenario where you’re customizing QGC’s settings interface. The application settings module in the QGC source code contains numerous QML files defining various settings panels; open UI\AppSettings\AppSettingsModule\GeneralSettings.qml. Let’s say you want to replace an existing button with a similar one.
First, locate the existing button code:
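Your checkout may differ, but the button in question is a reset-style control along these lines (handler body simplified here):

    QGCButton {
        text: qsTr("Reset All Settings")
        onClicked: {
            // confirms with the user, then calls QGC's internal reset logic
        }
    }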
Delete this code and replace it with a comment describing what you want:
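For example:

    // Button that resets all settings to their defaults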
Trigger code generation with Ctrl + '. The model might generate:
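A typical completion looks like this; note that resetSettings() is a name the model invented, not a real QGC function:

    QGCButton {
        text: qsTr("Reset Settings")
        onClicked: resetSettings()  // hypothetical; QGC's actual reset call differs
    }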
This generation is close but not identical to the original. The text differs slightly, and the model doesn’t know the internal reset function to call. However, it correctly identified that you need a QGCButton (a custom QGC component) rather than a standard Button, and it maintained the general structure.

Limitations and Realistic Expectations
Setting appropriate expectations is crucial for productive use of CodeLlama-QML. Understanding what the tool can and cannot do helps you apply it effectively without frustration.
What It Does Well
CodeLlama-QML excels at:
Boilerplate Generation: Creating basic component structures quickly
Property Completion: Suggesting appropriate properties for components
Layout Patterns: Generating common layouts like rows, columns, and grids
Learning Aid: Showing you QML patterns you might not know
Syntax Reduction: Minimizing the amount of typing for routine tasks
What It Struggles With
The model has difficulty with:
Project Context: Understanding your specific codebase structure
Complex Logic: Generating sophisticated signal handlers and business logic
State Management: Creating complex state machines or workflows
API Integration: Calling project-specific functions correctly
Performance Optimization: Recognizing when generated code might be inefficient
Performance Considerations
Hardware significantly impacts the CodeLlama-QML experience. Testing revealed:
High-end GPU (RTX 3000 series or better): 0.5-1 second response time, smooth experience
Mid-range GPU: 2-5 second response time, usable but noticeable delays
CPU-only: 30+ seconds response time, impractical for regular use
If you’re working on a machine without a capable GPU, the tool becomes frustrating rather than helpful. In this case, consider whether a GitHub Copilot subscription might provide better value, as cloud-based generation maintains consistent performance regardless of local hardware.
Conclusion
You've successfully set up a complete AI-powered QML development environment running locally on your machine. With Ollama serving CodeLlama-QML through Qt Creator's AI assistant plugin, you now have intelligent code completion that understands QML syntax, property bindings, and component hierarchies. As you work with QGroundControl and Qt, this setup will accelerate UI development by generating boilerplate layouts, suggesting component properties, and helping you learn QML patterns while you focus on building effective drone control interfaces. The more you use it, the better you'll understand when to leverage AI assistance and when to rely on your own expertise.
