
Driving ethical innovation in the age of generative AI.

Our approach to generative AI is grounded in a deep respect for human creativity. It is designed to empower, not replace, and built on a commitment to responsible, creator-first innovation.
Learn more

Building with purpose in the generative AI era.


At Chaos, we believe AI should enhance human creativity - not replace it. Our AI tools are built to support architects, engineers, contractors, and visual artists with capabilities that amplify imagination and efficiency while protecting authorship and originality. 

As generative AI becomes a fixture in design and visualization, our commitment is clear: drive innovation without compromise - respecting creators, safeguarding intellectual property, and providing legal and ethical clarity every step of the way.

Our Approach to Responsible AI at Chaos.


We designed our AI tools as co-creation platforms - putting professionals in control while harnessing the power of AI. They’re built to amplify creativity, not compromise it. 

Here’s how we ensure our approach stays thoughtful, responsible, and creator-first:

We use publicly available datasets.

Veras and Chaos AI Enhancer use Stable Diffusion, which was trained on pairs of images and captions from LAION-5B, a publicly available dataset derived from Common Crawl data scraped from the web. For other models, we only use datasets that are open source and suitable for commercial use, or that have been properly licensed for our use.
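
As a purely illustrative sketch of this kind of check (not our actual data pipeline), the snippet below screens a candidate dataset’s declared license against an allow-list of commercially usable licenses before it is considered for training; the dataset ID and the allow-list are placeholders.

  # Illustrative only: screen a candidate dataset's declared license before use.
  # The dataset ID and allow-list are placeholders, not Chaos's real configuration.
  from datasets import load_dataset_builder

  COMMERCIAL_OK = {"cc0-1.0", "cc-by-4.0", "apache-2.0", "mit"}  # example allow-list

  def license_allows_commercial_use(dataset_id: str) -> bool:
      info = load_dataset_builder(dataset_id).info  # reads metadata without downloading the data
      declared = (info.license or "").lower()
      return any(lic in declared for lic in COMMERCIAL_OK)

  candidate = "some-org/some-image-caption-dataset"  # hypothetical dataset ID
  print(candidate, "commercially usable:", license_allows_commercial_use(candidate))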

You own your outputs.

Chaos does not claim any ownership of your outputs - as long as your contracts allow it and you follow the rules for any third-party models or assets you use.

You’re a co-author, not a bystander.

Our AI tools encourage human authorship. They streamline your creative process by using your input data and letting you choose the seed, prompts, controls and outputs, while preserving the human input required for copyright protection under U.S. law.
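
To make that concrete, here is a minimal sketch of seed- and prompt-controlled image-to-image generation using the open-source diffusers library; it is an analogue of the workflow, not our implementation, and the model ID, prompt, and parameters are examples only.

  # Minimal illustration of seed- and prompt-controlled image-to-image generation.
  # Uses the open-source diffusers library; this is NOT Chaos's implementation of Veras.
  import torch
  from diffusers import StableDiffusionImg2ImgPipeline
  from PIL import Image

  pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
      "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
  ).to("cuda")

  base = Image.open("my_viewport_capture.png").convert("RGB")  # your authored content (the "substrate")
  generator = torch.Generator("cuda").manual_seed(42)          # locking the seed keeps results reproducible

  result = pipe(
      prompt="concrete and glass pavilion at dusk, soft natural light",  # your creative direction
      image=base,
      strength=0.5,        # how far the model may depart from your base image
      generator=generator,
  ).images[0]
  result.save("render_variant.png")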

You control what’s shared.

For Veras and Glyph, anonymous rendering and usage data is only collected if you choose to share it. You can disable this during setup or globally for Veras and Glyph via our IT configuration guide. For the AI Enhancer, anonymized input and output images are stored for QA and diagnostic purposes only.  

How We Test and Safeguard AI at Chaos.


At Chaos, responsible AI is core to how we build.

Every AI feature we design reflects our commitment to fairness, safety, and accountability, without compromising the pace of innovation. These principles don’t sit on the sidelines - they shape our product philosophy from the ground up.


Ethical training data.

We source training data with care, using curated datasets designed to support the unique needs of architectural visualization. Rather than pulling indiscriminately from the public web, we rely on properly licensed, context-relevant content that reflects professional standards and diverse design sensibilities. Our goal: inspire creativity without compromising integrity.


Quality through continuous testing.

Before any AI feature is released, it is rigorously tested - both through automated systems and hands-on human review. This process doesn’t end at launch. We continue evaluating performance in the real world to ensure results stay reliable, nuanced, and aligned with user expectations.
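
As a simplified, hypothetical illustration of what one automated check can look like (not our actual test suite), the snippet below compares a newly generated render against an approved baseline and flags structural regressions; the metric, threshold, and file names are examples.

  # Hypothetical regression check: compare a new output against an approved baseline image.
  # Metric choice, threshold, and paths are illustrative, not Chaos's real QA configuration.
  import numpy as np
  from PIL import Image
  from skimage.metrics import structural_similarity as ssim

  def passes_regression_check(baseline_path: str, candidate_path: str, threshold: float = 0.85) -> bool:
      baseline = np.asarray(Image.open(baseline_path).convert("L"))
      candidate = np.asarray(
          Image.open(candidate_path).convert("L").resize((baseline.shape[1], baseline.shape[0]))
      )
      score = ssim(baseline, candidate)  # 1.0 means structurally identical
      return score >= threshold

  print("Regression check passed:", passes_regression_check("baseline_render.png", "new_render.png"))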


Ethical oversight by design.

Every AI capability we build goes through a structured review process - balancing technical evaluation with diverse human perspectives. This helps us proactively identify risks like bias or misrepresentation, and shape tools that serve a broad spectrum of users with fairness, ethics and care. 


Feedback as the guiding force.

Your voice plays a central role in how we improve. Community forums, alpha and beta programs, and targeted customer outreach and roundtables allow us to stay connected to your experiences. This ongoing dialogue ensures our AI evolves with your needs - and stays grounded in reality.

FAQ: AI and Your Data at Chaos.


As AI becomes more embedded in design and visualization workflows, we understand how essential transparency around intellectual property (IP) and data security has become. This FAQ is designed to clearly explain how your data is managed and what rights you retain when working with AI-powered tools from Chaos.


We cover our three key AI tools:

  • Veras – AI Rendering
  • AI Enhancer – AI Post-Processing
  • Glyph – AI-Driven Auto-Documentation


For each tool, you’ll find a breakdown of two critical areas: the IP behind the training data, and the measures in place to secure your data during use. Our goal is to give you confidence and clarity as you explore what AI can unlock in your creative process. If you have more questions, don’t hesitate to reach out to us - we are happy to help!

Veras - Who owns the output images?

According to the U.S. Copyright Office, AI-generated content can be copyrighted when a human contributes to or meaningfully edits the image: Read the official guidance here. Veras uses your 3D model and camera view as the visual foundation ("substrate") for rendering, meaning the output is directly based on your authored content. You're also providing human input through prompt creation, seed locking, and render selection—making the process co-authored. These human input components, inherent in the use of Veras, firmly align any output with the copyrightable content requirements of the United States Copyright Office guidance.  

In the EU, the focus is more on the originality of the work. Different countries may apply slightly different thresholds for what qualifies as sufficient human input or creative originality. So while human involvement is key, the exact legal treatment may differ from country to country.

We can assure you that Chaos does not claim ownership to the outputs you create using Veras. However, we may retain certain limited rights to use those outputs — primarily to operate and improve our products and services. Additionally, we may impose certain limitations on the use of our products, services, and content (for example, prohibiting uses that violate applicable law or that involve creating physical embodiments of our proprietary content). You can find the exact wording in our License and Services Agreement.

Your rights to the outputs will depend on several factors, particularly the context in which the rendering was created (e.g., whether independently, as part of your employment, or under a client contract), as well as any third-party rights in the underlying model or assets used.

Veras - What is the underlying AI model?

Veras is powered by Stable Diffusion, which is trained on the LAION dataset, a large-scale, publicly available image-text dataset compiled from internet sources.

Veras - Does that mean someone else owns the training data or output?

No. The output generated by Veras is not a reproduction of any single training image. The model generates entirely new content based on the structure of your 3D design and your textual guidance. As long as the input is yours, the output is yours.

Veras - How is my project data handled?

Veras sends only essential data (camera view and geometry snapshot) to the cloud for rendering.

  • Data is encrypted both in transit and at rest using industry-standard protocols such as AES-256 and TLS 1.2 (REST API). Firebase and Firestore serve as our cloud storage providers, ensuring secure data storage (see the sketch after this list).
  • Processing occurs on secure, industry-compliant servers located in the United States (e.g., AWS or Azure with ISO 27001 certification).
  • Your data is never used for AI model training or shared with third parties.
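
To make "encrypted at rest with AES-256" concrete, here is a generic sketch using the open-source cryptography library; it illustrates the algorithm class only and is not our production key management or storage code.

  # Generic illustration of AES-256-GCM encryption at rest (not Chaos's production code).
  # Key handling is simplified here; real systems use a managed key service.
  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  key = AESGCM.generate_key(bit_length=256)   # 256-bit key, i.e. "AES-256"
  aesgcm = AESGCM(key)

  def encrypt_blob(plaintext: bytes) -> bytes:
      nonce = os.urandom(12)                  # unique nonce per message
      return nonce + aesgcm.encrypt(nonce, plaintext, None)

  def decrypt_blob(blob: bytes) -> bytes:
      nonce, ciphertext = blob[:12], blob[12:]
      return aesgcm.decrypt(nonce, ciphertext, None)

  snapshot = b"camera view + geometry snapshot bytes"
  assert decrypt_blob(encrypt_blob(snapshot)) == snapshot
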
AI Enhancer - Who owns the enhanced images?

You do. The AI Enhancer applies visual improvements to your own renderings. It does not introduce new content or alter your design intent, so the intellectual property remains entirely yours.

AI Enhancer - What model does it use?

The AI Enhancer is powered by Stable Diffusion - trained on the LAION dataset, a large-scale, publicly available image-text dataset compiled from internet sources - combined with open-source image processing techniques such as denoising and detail enhancement. It is not trained on user-generated content and does not introduce any outside imagery.
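
For context, denoising and detail enhancement of this kind can be illustrated with standard open-source tools such as OpenCV; the snippet below is a generic example of the technique category, not the AI Enhancer’s actual pipeline, and all parameters are arbitrary.

  # Classic open-source denoising and detail enhancement with OpenCV (illustrative only).
  import cv2

  img = cv2.imread("rendering.png")

  # Remove noise while preserving edges (non-local means denoising).
  denoised = cv2.fastNlMeansDenoisingColored(img, None, 6, 6, 7, 21)

  # Unsharp masking: subtract a blurred copy to enhance fine detail.
  blurred = cv2.GaussianBlur(denoised, (0, 0), 2.0)
  enhanced = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

  cv2.imwrite("rendering_enhanced.png", enhanced)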

AI Enhancer - How is image data handled?
  • Only the rendered image and metadata (specifically object masks and asset IDs) are sent - no geometry or scene data is uploaded (see the sketch after this list).
  • Your content in Chaos Cloud is encrypted in transit and at rest using advanced encryption mechanisms. For stored data, we utilize the robust AES-256 algorithm. All data transfers between you and Chaos Cloud are encrypted using the most secure TLS protocol versions.
  • Processing occurs on secure, industry-compliant servers (e.g., private GCP).
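
As a purely hypothetical sketch of what such a minimal payload could look like (the field names are invented for illustration and are not the AI Enhancer’s real wire format), note what is included and what is deliberately absent:

  # Hypothetical minimal payload: the rendered image plus object masks and asset IDs only.
  # Field names are illustrative; no geometry or scene data is included.
  import base64, json

  def build_enhancer_payload(image_path: str, mask_path: str, asset_ids: list) -> str:
      with open(image_path, "rb") as f:
          image_b64 = base64.b64encode(f.read()).decode("ascii")
      with open(mask_path, "rb") as f:
          masks_b64 = base64.b64encode(f.read()).decode("ascii")
      return json.dumps({
          "rendered_image": image_b64,  # the finished render only
          "object_masks": masks_b64,    # masks used for targeted enhancement
          "asset_ids": asset_ids,       # references to assets, not the assets themselves
          # Deliberately absent: geometry, materials, or any other scene data.
      })

  payload = build_enhancer_payload("render.png", "masks.png", ["chair_001", "tree_014"])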

We store the input & output images in the United States and use them for QA and diagnostic purposes only, as per the EULA. This data is not used for training, nor are there any plans to do so.

Glyph - Who owns the generated documents?

You do. Glyph works entirely within your BIM environment, producing output (views, sheets, tags, and dimensions) based on your model. Prompts and user inputs drive the results, and all output remains within your project and under your control.

Glyph - What model powers the Copilot?

Glyph Copilot uses OpenAI’s ChatGPT (GPT-4) to assist in executing the Glyph tasks and bundles. The LLM is trained on a wide range of publicly available internet data and does not incorporate customer content.

Glyph does not retrain or fine-tune the model using your data.
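
For a sense of scale, a copilot call of this kind sends only short prompt text to the API, as in the generic sketch below using the public OpenAI SDK; this is not Glyph’s internal code, and the system prompt, user request, and task mapping are invented examples.

  # Generic illustration of a copilot-style request: only short prompt text is sent,
  # never the BIM model itself. Uses the public OpenAI SDK; not Glyph's internal code.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-4",
      messages=[
          {"role": "system", "content": "Map documentation requests to predefined task names."},
          {"role": "user", "content": "Create dimensioned floor plan sheets for levels 1-3."},
      ],
  )
  print(response.choices[0].message.content)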

Glyph - How is your model data protected?
  • Only minimal, relevant metadata or prompt text is sent to OpenAI’s API - not the full model or views.
  • All data sent to the API is encrypted in transit and handled securely.
  • OpenAI’s API terms for enterprise usage prevent data from being used for training or stored after processing.
  • Glyph retains all sensitive model data locally unless the user sends textual information via a Glyph Copilot chat message.

Veras & Glyph - Does Chaos use my prompts or images in Veras or Glyph for machine learning?

No. However, in the EULA we do reserve the option to do so in the future. When you first launch the app, there are two checkboxes you can turn off if you would like to disallow anonymous collection of images, prompts, and app usage data.


To turn this off at a global level, IT professionals can follow these instructions: https://forum.evolvelab.io/t/installing-veras-msi-configurations-remote-deployment-for-it-managers/4717

Still have questions?
