Overview: ReX Beta Security

ReX Beta protects the confidentiality of your company-specific data through comprehensive technical measures that safeguard sensitive information.

Amid the current AI technology boom, our clients are keenly interested in understanding the security of using ReX. This article describes how ReX Beta keeps your data secure.

ReX Beta is safe and secure

ReX Beta is a secure, private enterprise web application hosted on a high-grade Azure Cloud Kubernetes Cluster, with authentication managed via Google Cloud's Firebase Authentication. User input is securely stored in Firebase Firestore, and any uploaded documents are saved in MongoDB's service on AWS cloud.

The application employs a multi-tenant architecture to ensure your data is kept separate and secure from other users. Your data is never shared publicly or used to train AI models.
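To illustrate what multi-tenant isolation means in practice, here is a minimal sketch (illustrative only, not ReX's actual implementation): every read and write is keyed by the authenticated tenant's ID first, so a request made on behalf of one tenant can never reach another tenant's records. All names below are hypothetical.

```python
# Hypothetical sketch of tenant-scoped data access; class and method
# names are illustrative, not ReX's actual code.
from dataclasses import dataclass, field


@dataclass
class TenantStore:
    """In-memory stand-in for a multi-tenant document store."""
    _records: dict = field(default_factory=dict)  # tenant_id -> {doc_id: content}

    def put(self, tenant_id: str, doc_id: str, content: str) -> None:
        self._records.setdefault(tenant_id, {})[doc_id] = content

    def get(self, tenant_id: str, doc_id: str) -> str:
        # Lookups are always scoped by tenant_id first, so a request
        # authenticated as one tenant cannot see another tenant's data.
        tenant_docs = self._records.get(tenant_id, {})
        if doc_id not in tenant_docs:
            raise KeyError(f"no document {doc_id!r} for tenant {tenant_id!r}")
        return tenant_docs[doc_id]


store = TenantStore()
store.put("acme", "course-1", "Acme onboarding outline")
store.put("globex", "course-1", "Globex sales deck")
print(store.get("acme", "course-1"))  # Acme onboarding outline
```

Even though both tenants use the same document ID, each only ever sees its own copy; a production system applies the same scoping at the database and authentication layers.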

Furthermore, ReX operates a private instance of OpenAI's GPT Large Language Model (LLM), guaranteeing that your inputs remain confidential and accessible only to you.

ReX Secure Content Storage Diagram

FAQ - Security

Is the data we input into ReX kept confidential?

  • Yes, all client data and content are private, confidential, and exclusive to your enterprise. Data, content, or documents input into our GPT model are stored in a private database on Azure. This setup ensures that only your private model can access your data for processing; no one outside your enterprise can access your inputs, except Regis tech support when assisting you.

Is ReX based on ChatGPT?

  • No, ReX does not use ChatGPT. Instead, it runs on our private instance of the GPT model, hosted and scaled on Azure Cloud. This setup provides enterprise-grade security and isolation, managed by Azure as part of its secure, enterprise-ready services built on OpenAI technology.

How can ReX use GPT if it's developed by OpenAI?

  • There's often confusion between OpenAI, the company, and its GPT models. OpenAI developed the GPT series (e.g., GPT-4), and while they offer access to these models through ChatGPT, Microsoft Azure enables enterprises to host private instances of the GPT model. ReX uses such a private instance, making it our bespoke application built upon OpenAI's model technology.

Does ReX use client data to train its LLM?

  • No, we do not use client data, or our own, to train any LLMs. Our instance of the GPT model, provided through Microsoft Azure, lets us use the model's capabilities while keeping all client data private and secure, with no additional training required.

Can the Azure OpenAI model be fine-tuned for specific needs?

  • No, currently, we do not fine-tune the Azure OpenAI model. Our approach, which involves using your private documents with our chat feature, typically yields excellent results without the need for fine-tuning. If you believe there's a strong case for a custom-tuned model for your specific use case, we're open to exploring this possibility with you.

Are there IP issues with auto-generating content with Regis tools and using them in my courses?

  • No, there is no issue with using the content, but establishing a protectable copyright may be difficult. Source.

  • “OpenAI's TOS assign ownership of all of ChatGPT's outputs to users (see section 3(a) of their TOS), but the more fundamental Q is whether AI-generated works are copyrightable in the first place. As the law is now, probably not, but it's being litigated.” Source

Can ReX provide source information for generated content?

  • No, content generated by standard GPT models does not include source information, as the model generates original content without directly quoting pre-existing materials. However, ReX allows users to upload source documents and leverage the LLM to create new expressions based on those documents, giving you access to the underlying references.
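The idea above can be sketched simply (illustrative only, not ReX's internals): before generating, the system retrieves the most relevant uploaded passage and carries its source name along with the result, which is what makes references possible. The function and document names below are hypothetical.

```python
# Toy keyword-overlap retrieval over uploaded documents; names are
# hypothetical, for illustration only.

def retrieve(query: str, documents: dict) -> tuple:
    """Return (doc_name, text) of the document sharing the most words with the query."""
    query_words = set(query.lower().split())

    def overlap(item):
        _name, text = item
        return len(query_words & set(text.lower().split()))

    # The winning passage is returned together with its source name,
    # so downstream generation can cite where the answer came from.
    return max(documents.items(), key=overlap)


docs = {
    "handbook.pdf": "Refund requests must be filed within 30 days of purchase.",
    "faq.txt": "Our support team answers tickets within one business day.",
}

source, passage = retrieve("how long do I have to request a refund", docs)
print(source)  # handbook.pdf
```

A real system would use semantic embeddings rather than word overlap, but the principle is the same: the generated answer is grounded in a retrieved passage whose source document is known.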


