With the release of OpenAI's ChatGPT in late 2022 and its explosion in popularity this year (2023), a wave of excitement and fear has swept society about these AI tools. We now hear stories about students using these tools to write their term papers, companies using them to write their social media posts, and a slew of deepfake videos that leave us questioning what is real and what isn't.
This is the first episode of Risk and Robots, where we discuss AI with someone who is actually building it. Ben Van Roo is the CEO of Yurts.ai. Yurts was founded in 2022 (so pre-ChatGPT…they didn't just jump into the craze). Yurts is tackling two major issues around AI: one is the trust and credibility of AI output, and the other is the security of data. (FULL transcript below)
A word we hear constantly today is "hallucination." AI tools can glitch, returning results that make you scratch your head and wonder how in the world the model produced such output given the prompt you gave it. The Yurts.ai platform is highly focused on accuracy, but because of the team's deep domain knowledge of AI and the models behind it, Yurts knows that 100% accuracy is not in the cards. So much of their work revolves around building the platform on the right models, but also creating a layer of transparency, so users can see how the platform arrived at an output, including links to the source documents the model used to generate it.
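The pattern described here, grounding an answer in retrievable source documents and surfacing those documents to the user, is commonly implemented as retrieval with source attribution. The sketch below is a toy illustration of the idea, not Yurts' actual code; all names and the keyword-overlap scoring are hypothetical:

```python
# Toy sketch of retrieval with source attribution (hypothetical, not
# Yurts' implementation): the output is paired with the document ids it
# was drawn from, so a user can click through and verify the answer.

def score(query: str, text: str) -> int:
    """Count query words appearing in the document (toy relevance score)."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in text.lower())

def retrieve_with_sources(query: str, corpus: dict[str, str], k: int = 2):
    """Return the top-k documents plus their ids, so the caller can cite them."""
    ranked = sorted(corpus.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    top = ranked[:k]
    # In a real system, the text of `top` would be passed to a language
    # model as grounding context; here we just return it alongside the ids.
    return {"sources": [doc_id for doc_id, _ in top],
            "context": [text for _, text in top]}

corpus = {
    "hr-014": "Employees accrue 15 vacation days per year.",
    "it-002": "Passwords must be rotated every 90 days.",
}
result = retrieve_with_sources("how many vacation days do employees get", corpus)
```

Because every answer carries its `sources` list, a user who doubts the output can open `hr-014` directly, which is the transparency layer the paragraph above describes.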
Just as important is who can access the source documents and who can access the platform's output. Yurts' initial work is with governmental and defense-oriented organizations, where security may be the #1 criterion, and Yurts' platform can segregate data and output by degrees of security. Some data cannot be exposed to the outside world in any form, while other data you may want to make available to your clients or to the public via a chatbot. Yurts' platform creates those security layers so that what needs to stay private always remains private, while providing the appropriate authentication for clients or others to access the non-security-critical output.
Consider an insurance example: a carrier needs one private AI interface for executives and another for employees, but would also like an AI interface for the agents appointed with them. Those agents get interface access only to the data the carrier authorizes for them; sensitive company data is not merely blocked but quarantined. For that sensitive data, the models and platform literally never have access to those documents at all.
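The distinction between blocking and quarantining can be sketched as filtering the corpus by clearance tier *before* retrieval ever runs, so the model never sees out-of-tier documents rather than seeing and then redacting them. This is a hypothetical illustration with invented tier names, not Yurts' design:

```python
# Hypothetical sketch of the "quarantine" idea: documents the requester
# is not cleared for are removed before retrieval, so the model never
# has access to them (as opposed to retrieving and then redacting).

from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str
    audience: str  # "public", "agent", "employee", or "executive"

# Ordered least to most privileged; a role may read its own tier and
# every tier below it. (Tier names are invented for this example.)
TIERS = ["public", "agent", "employee", "executive"]

def visible_corpus(docs: list[Doc], role: str) -> list[Doc]:
    """Return only the documents this role is allowed to see."""
    clearance = TIERS.index(role)
    return [d for d in docs if TIERS.index(d.audience) <= clearance]

docs = [
    Doc("faq-01", "Claims can be filed online or by phone.", "public"),
    Doc("agt-07", "Agent commission schedule for 2023.", "agent"),
    Doc("exe-03", "Draft acquisition strategy.", "executive"),
]

# An appointed agent's queries are answered only from the first two
# documents; the executive draft is never handed to the model at all.
agent_view = visible_corpus(docs, "agent")
```

Filtering before retrieval, rather than after generation, is what makes the quarantine guarantee meaningful: there is no path by which quarantined text can leak into an answer.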
Here is a video of a new Yurts hire using the Yurts interface to ask questions about company policy:
What the video shows (and what we discussed in the interview) is that employees spend way, WAY too much time searching for things. Yes, you have a company policy on vacations, but for an employee to read it almost always means going to some portal, logging in, and hunting for the document. That might not be tremendously time-consuming on its own, but multiply it by the scores of things we need to find every day. This is the productivity problem Yurts is solving with their AI technology.