At Chaos, we believe AI should enhance human creativity - not replace it. Our AI tools are built to support architects, engineers, contractors, and visual artists with capabilities that amplify imagination and efficiency while protecting authorship and originality.
As generative AI becomes a fixture in design and visualization, our commitment is clear: drive innovation without compromise - respecting creators, safeguarding intellectual property, and providing legal and ethical clarity every step of the way.
We designed our AI tools as co-creation platforms - putting professionals in control while harnessing the power of AI. They're built to amplify creativity, not compromise it.
Here’s how we ensure our approach stays thoughtful, responsible, and creator-first:
We use publicly available datasets.
Veras and Chaos AI Enhancer use Stable Diffusion, which was trained on pairs of images and captions taken from LAION-5B, a publicly available dataset derived from Common Crawl data scraped from the web. For other models, we only use datasets that are open-source and suitable for commercial use, or that have been correctly licensed for our use.
You own your outputs.
Chaos does not claim any ownership of your outputs, as long as your contracts allow it and you follow the rules for any third-party models or assets you use.
You’re a co-author, not a bystander.
Our AI tools encourage human authorship. They streamline your creative process by using your input data and letting you choose the seed, prompts, controls and outputs, while preserving the human input required for copyright protection under U.S. law.
You control what’s shared.
For Veras and Glyph, anonymous rendering and usage data is only collected if you choose to share it. You can disable this during setup or globally for Veras and Glyph via our IT configuration guide. For the AI Enhancer, anonymized input and output images are stored for QA and diagnostic purposes only.
At Chaos, responsible AI is core to how we build.
Every AI feature we design reflects our commitment to fairness, safety, and accountability, without compromising the pace of innovation. These principles don’t sit on the sidelines - they shape our product philosophy from the ground up.
Ethical training data.
We source training data with care, using curated datasets designed to support the unique needs of architectural visualization. Rather than pulling indiscriminately from the public web, we rely on properly licensed, context-relevant content that reflects professional standards and diverse design sensibilities. Our goal: inspire creativity without compromising integrity.
Quality through continuous testing.
Before any AI feature is released, it is rigorously tested - both through automated systems and hands-on human review. This process doesn’t end at launch. We continue evaluating performance in the real world to ensure results stay reliable, nuanced, and aligned with user expectations.
Ethical oversight by design.
Every AI capability we build goes through a structured review process - balancing technical evaluation with diverse human perspectives. This helps us proactively identify risks like bias or misrepresentation, and shape tools that serve a broad spectrum of users with fairness, ethics and care.
Feedback as the guiding force.
Your voice plays a central role in how we improve. Community forums, alpha and beta programs, and targeted customer outreach and roundtables allow us to stay connected to your experiences. This ongoing dialogue ensures our AI evolves with your needs - and stays grounded in reality.
As AI becomes more embedded in design and visualization workflows, we understand how essential transparency around intellectual property (IP) and data security has become. This FAQ is designed to clearly explain how your data is managed and what rights you retain when working with AI-powered tools from Chaos.
We cover our three key AI tools: Veras, the Chaos AI Enhancer, and Glyph.
For each tool, you’ll find a breakdown of two critical areas: the IP behind the training data, and the measures in place to secure your data during use. Our goal is to give you confidence and clarity as you explore what AI can unlock in your creative process. If you have more questions, don’t hesitate to reach out to us - we’re happy to help!
According to the U.S. Copyright Office, AI-generated content can be copyrighted when a human contributes to or meaningfully edits the image. Read the official guidance here. Veras uses your 3D model and camera view as the visual foundation ("substrate") for rendering, meaning the output is directly based on your authored content. You're also providing human input through prompt creation, seed locking, and render selection—making the process co-authored. These human input components, inherent in the use of Veras, firmly align any output with the copyrightable content requirements of the United States Copyright Office guidance.
In the EU, the focus is more on the originality of the work. Different countries may apply slightly different thresholds for what qualifies as sufficient human input or creative originality. So while human involvement is key, the exact legal treatment may differ from country to country.
We can assure you that Chaos does not claim ownership of the outputs you create using Veras. However, we may retain certain limited rights to use those outputs — primarily to operate and improve our products and services. Additionally, we may impose certain limitations on the use of our products, services, and content (for example, prohibiting uses that violate applicable law or that involve creating physical embodiments of our proprietary content). You can find the exact wording in our License and Services Agreement.
Your rights to the outputs will depend on several factors, particularly the context in which the rendering was created (e.g., whether independently, as part of your employment, or under a client contract), as well as any third-party rights in the underlying model or assets used.
Veras is powered by Stable Diffusion, which is trained on the LAION dataset, a large-scale, publicly available image-text dataset compiled from internet sources.
No. The output generated by Veras is not a reproduction of any single training image. The model generates entirely new content based on the structure of your 3D design and your textual guidance. As long as the input is yours, the output is yours.
Veras sends only essential data (camera view and geometry snapshot) to the cloud for rendering.
You do. The AI Enhancer applies visual improvements to your own renderings. It does not introduce new content or alter your design intent, so the intellectual property remains entirely yours.
The AI Enhancer is powered by Stable Diffusion, which is trained on the LAION dataset, a large-scale, publicly available image-text dataset compiled from internet sources, and open-source image processing techniques, such as denoising and detail enhancement. It is not trained on user-generated content and does not introduce any outside imagery.
We store the input and output images in the United States and use them for QA and diagnostic purposes only, as per the EULA. This data is not used for training, nor are there any plans to do so.
You do. Glyph works entirely within your BIM environment, producing output (views, sheets, tags, and dimensions) based on your model. Prompts and user inputs drive the results, and all output remains within your project and under your control.
Glyph Copilot uses OpenAI’s ChatGPT (GPT-4) to assist in executing Glyph tasks and bundles. The LLM is trained on a wide range of publicly available internet data and does not incorporate customer content.
Glyph does not retrain or fine-tune the model using your data.
No. However, in the EULA we do reserve the option to do so in the future. When you first launch the app, there are two checkboxes you can turn off to disallow anonymous collection of images, prompts, and app usage data.
To turn this off at a global level, IT professionals can follow these instructions: https://forum.evolvelab.io/t/installing-veras-msi-configurations-remote-deployment-for-it-managers/4717