Video: ISO 42001 in Practice: From Documentation to Ongoing Compliance | Duration: 3560s | Summary: ISO 42001 in Practice: From Documentation to Ongoing Compliance | Chapters: Webinar Introduction (5.36s), AI Regulation Overview (288.47s), Real-time AI Monitoring (847.99s), AI Safety Regulations (1516.81s), ISO 42001 Certification (1610.88s), Monitoring Third-Party AI (1898.715s), AI Safety Standards (2042.96s), AI Expertise Requirements (2351.46s), Healthcare Compliance Challenges (2581.21s), AI Compliance Considerations (2967.525s), ISO 42001 Resources (3066.44s), Automotive AI Implications (3218.205s), Conclusion and Thanks (3511.975s)
Transcript for "ISO 42001 in Practice: From Documentation to Ongoing Compliance": Good morning. Oh, good afternoon. Welcome to everyone who's joining. I'm just gonna give it one minute for people to join. But, yeah, we're very excited to go through this openly interactive webinar, all focused around ISO 42001. I'll introduce our guest speakers in a minute. And welcome, Andre Drania from Hamburg. It's 11:00 in London, but I know some of you are dialing in from Europe. Tobias in Switzerland. Hans from Denmark. Good to see everybody. Ben from Lincoln. That's a little bit closer to my hometown. Okay. Just give it twenty more seconds for people to join. Everything you're gonna hear today will be recorded. We've got a number of resources, so don't worry about taking notes. You will get a full recording and all the assets that we're gonna share with you today. So let's kick off. So first of all, thank you so much for attending. I've got the pleasure of hosting this webinar. I'm hoping it's gonna be interactive, and I think it's pretty unique, and I'll explain why. So we're gonna focus around ISO 42001 in practice. We know that AI regulation is fast evolving, whether that's in our personal life or in our professional workplace. And what Drata is trying to do is continuously monitor and bridge the gap between the documentation and the real world. And in order to do that, and to bring this to life, it's my real pleasure to introduce Sammy. Sammy is the cofounder and CCO of Prescient Security. They're one of Drata's Audit Alliance partners. And although Drata's provided the GRC automation platform, you're not gonna be successful unless you've got an audit and you've gone through that. So Sammy and the team are gonna bring their audit-lens expertise to this. And then I'm very excited to introduce Nick. So Nick's the cofounder and CEO of Raids.ai. They're both a customer of Drata, but they're also a partner. 
And what they do is, Nick's gonna provide a better explanation, but my analogy is, they're continuously looking at the behavior of your AI systems and normalizing that; they've got some great expertise to look at that and see what your AI is doing in your production environment. So if you think about that holistically, with Drata, the auditor, and then the tools that Raids.ai are bringing, hopefully it's gonna be a pretty unique experience. So I've got just two slides to introduce you to Drata, the GRC platform. Most of you on this call are existing Drata customers, and you typically bought Drata for two reasons. Typically, you wanna drive more automation and make it easier. Most of you are doing SOC 2 or ISO 27001 as your baseline. And the reason, on the left-hand side, there's all those connections, is the more we can connect to, the more we can drive that automation. And then in your GRC strategy, whether it's policy governance, looking at risk, looking at the controls under each one of those, that's what we're trying to automate and make easier for you. And when we look in particular around ISO 42001, there's a lot of overlap with the work that you've already done on ISO 27001 or SOC 2. So you can see here on the right-hand side that when we apply the ISO 42001 framework in Drata, of the 65 DCFs to fulfill that, on ISO you've already got 26 that overlap, and 23 on SOC 2. And then, of all that work you've done on the policies, 23 of those overlap with the ISO 42001 requirements. So that's enough on Drata. I wanna hand it off to the real experts, with domain expertise in this area. So without further ado, I'm gonna introduce you to Sammy, who's the CCO and cofounder of Prescient Security. Welcome, Sammy. Thank you, Andy. Appreciate you all joining from all over the place. I'm joining you all from Houston, Texas. And because I live in Texas, obviously, regulations are not so much my thing. 
So today's agenda that I put together for you has the following three points. First, why self-regulate AI? I added the word self because I'm kind of a big believer in self-regulation as opposed to external regulation. Second, how can the ISO 42001 standard or certification help you get there? Right? And third, why we use Drata and Raids for ISO certification. And so we'll go into that a little bit later as well. Alright. So if we look at all the regulations that have been popping up around the world, obviously, there is an explosion of it. And that's because AI is moving much faster than we can keep up with it. AI is risky and useless if we don't have guardrails. The analogy that I like to use here is the Jurassic Park analogy. Right? You have all these species of models out there. They're magical. They're great if they're kept in their cages. Right? As AI engineers, you spend a lot of time context engineering and prompt engineering, putting guardrails in and kind of taming these beasts so that they don't get out of control in production environments and wreak havoc. Right? Traditional security tools like firewalls and API gateways and SIEMs are failing significantly, and Nick will chime in on that later and tell you more about how to kind of control or monitor the behavior of artificial intelligence. So treat AI like you treat humans in your environment. As their behavior changes, anything can happen. There are real incidents. If you look at the AI Incident Database, there are over 5,000 events logged. There are a lot of class action lawsuits happening as well. So you really have to understand the risks and the regulations if you're building with or using AI. Alright. So I wanted to give you all some updates on the EU AI Act. Obviously, the upcoming deadline that we have here is August 2. 
If you are building in the employment, education, public service, or critical infrastructure space, you have to call us immediately, because you fall into the high-risk category, where you have some specific requirements listed under Annex III, which I will kind of go over shortly here. But overall, you have some governance- and monitoring-related controls and requirements. If we go into the details, and I'm not gonna read all of them here, you will see a theme or a pattern emerging: risk management systems, data governance, technical documentation, more record keeping, more human oversight, and a focus on the quality management system. So these are the expectations coming to you from the EU AI Act, which is the GDPR of AI, essentially. And the entire world is following their path and building new regulations as well. Alright. So when should you consider implementing the ISO 42001 standard? This standard was built, I would say, in a rush, to help you comply with the EU AI Act. Think of it as more of a process or a playbook that you can use as a business owner to build trust with your customers, to unlock those revenue channels. If you're deploying high-stakes or high-risk AI systems, then you must implement ISO 42001. More and more regulations are actually referencing the ISO 42001 standard as a requirement to comply with the laws and regulations. And then you obviously have a lot of pressure from stakeholders. If you're trying to raise funding from VCs, or your operators are looking for new, more mature processes, or your customers are looking for, you know, better assurance, then you have to do something about it. And so ISO 27001 is just not enough. You have to upgrade yourself to the ISO 42001 standard, and Drata can definitely help you implement those, you know, requirements. So looking under the hood, you have all of these required clauses that you must implement. 
For Clauses 4 through 10 that are listed here on the right-hand side, you see the purpose of those clauses listed: risk assessment, AI system life cycle, you know, monitoring, corrective actions, policies, procedures. These are just basic expectations that you need to meet, and we're not negotiable as auditors on these clauses. Now, moving over to the control requirements, or control objectives rather, there are 38 controls, and Andy mentioned about 65 DCFs, or Drata Control Framework controls, that are available to you to implement and comply with the standard. And they fall under the nine domains that are listed here. So what I've done is I've kind of broken them down, or mapped them rather, to agentic AI controls, because everybody developing is leveraging agentic AI architecture. So if you look at this list here in the table, on the left-hand side you have the ISO 42001 Annex A controls, and then you have some sample controls that belong to, you know, agentic architecture. So prompt versioning, model testing, and I'm sure Nick will talk about model drift, you know, agent behavior monitoring. And those are, you know, just a few controls that you can implement to comply with the control objectives that are listed and required under ISO 42001. So I just wanted to keep things real and give you some more clarity into your environment and what you can do today to, you know, build more trust with your customers. Alright. So how do I achieve ISO 42001 certification? So this is where we come in as Prescient Security, an audit firm and audit partner for Drata. You define the scope. You decide as an entity, you know, what systems and what business processes you wanna put under audit, and then you go through a risk assessment, proper documentation on Drata, continuous monitoring through Drata as well as Raids. And then you perform an internal audit and then go through an external audit with us. 
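To make one of those agentic controls concrete, here is a minimal sketch of prompt versioning in Python. This is an illustrative example, not part of the standard or any product mentioned here; the `PromptRegistry` class and its method names are hypothetical. The idea is that every prompt revision is content-addressed by hash and timestamped, so an auditor can tie an AI output back to the exact prompt text that produced it.

```python
import hashlib
from datetime import datetime, timezone

class PromptRegistry:
    """Hypothetical prompt-versioning registry: each prompt revision is
    stored with a SHA-256 digest and a UTC timestamp, giving an audit
    trail from a deployed prompt version back to its exact text."""

    def __init__(self):
        # Each entry: (version label, content hash, ISO timestamp, text)
        self._versions = []

    def register(self, prompt_text: str) -> str:
        """Record a new prompt revision and return its version label."""
        digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
        version = f"v{len(self._versions) + 1}"
        self._versions.append(
            (version, digest, datetime.now(timezone.utc).isoformat(), prompt_text)
        )
        return version

    def lookup(self, version: str) -> str:
        """Return the exact prompt text behind a given version label."""
        for v, _digest, _ts, text in self._versions:
            if v == version:
                return text
        raise KeyError(version)

registry = PromptRegistry()
v1 = registry.register("You are a helpful loan-assessment assistant.")
v2 = registry.register(
    "You are a helpful loan-assessment assistant. Cite policy IDs."
)
```

In practice you would persist this in version control or a database, but the point of the control is the same: no prompt change goes live without a recorded, hash-verifiable revision.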
And you get this nice, beautiful badge that marketing, you know, built for us and charged a lot of money for. And, yeah, I mean, it's a journey that could take four to six weeks. Obviously, reach out to us, and we can kind of help you scope things down and handhold you throughout this process. Now, why should you use Drata and Raids to pass and maintain ISO 42001 certification? This is what I would call the money slide, because auditors have their own requirements beyond the compliance requirements that, you know, Andy talked about. Right? We love logs and tickets and documentation. As you can see here, we demand that you have information produced by the entity, IPE, which is essentially integrity, provenance, evidence. Those are the kind of three categories that we apply when you show up to us post-readiness. So we are looking for evidence that's tamper-proof. We're looking for evidence that has timestamps. We're looking for evidence where we can find the trail through audit trails and other structured information. Right? So it's very important that you have a system like Drata in place so you can pass those audits with flying colors and not struggle through this process. And I'm sure Nick will talk a little bit about some of these as well on his slides. With that, let me turn it over to Nick. Thank you, Sammy. Really interesting. You covered a lot of the foundation there, which is great. I can build on that. And, firstly, thanks to Drata, Andy, and the team for putting this together. I think it's a very important webinar and very, very timely. And may I also say that we're very proud to be partnering with both Prescient and Drata. So looking forward to working with you and going through this. So there's a lot I can cover. Not enough time to go through all of it, so I'm gonna sort of distill it down to the essence. 
And we will be distributing a white paper from the three of us in about a week from today that goes into a lot more detail. And we're happy to take, you know, follow-on questions and continue the conversation beyond this. But, essentially, starting here: your AI has passed every test before deployment. So everything's great in the lab. You've gone through the compliance process. You've documented everything. Prescient's super happy with the stage one audit. Everything's great. And your AI is unleashed on the world, and it's interacting with your customers. It's interacting with employees. It's interacting with various stakeholders. Do you know what it's doing at every minute of every day, or do you only hear about things going wrong after the damage has occurred? You know, we wanna flip that around. And what if you had a smoke detector that doesn't necessarily need to understand thermodynamics, that doesn't need to understand the chemistry of combustion, but is able to tell you that something is going wrong? It's drifting, and your AI is gonna be behaving differently from what you intended. So as Sammy described, Clause 9 of ISO 42001 requires continuous monitoring. Now, why do we need to do that? Because with ISO 27001, you were essentially overseeing deterministic software. Right? There was very little change happening beyond deployment. AI is a completely different beast, and it's gonna continue to evolve into something that's very, very hard to keep track of, or be able to document and understand given a snapshot in time. So ISO 42001 is really designed for probabilistic software, meaning AI where you never really know what the output's gonna be. You never know if it's gonna evolve and change. So you need real-time continuous monitoring. However, approximately 76% of the AI we use, or the AI on which we build our products, comes from third-party vendors; it's not necessarily our own AI. Right? 
So there might be systems that we build on top of other AI. The reality is we don't have access to model internals. So a lot of companies in the monitoring space will require access to source code. They'll require access to model weights. They'll require access to other model internals, which poses two problems. You know, one, there's an IP issue. Nobody wants to provide that. And if we've built a system on top of, say, one of Anthropic's APIs and we call them up and say, look, we need access to this, that, and the other so we can monitor the model, the answer is going to be no. So all you really have, for the most part, is access to the inputs and the outputs of the model. And that's where this phrase black box monitoring comes from. Right? We don't have any ability to look inside what's happening in these black boxes. So we had to create a system that is able to look at just the inputs and outputs through an API and, from that, be able to determine a baseline, determine what normal behavior looks like, and then monitor for any deviations, which I can get into in a second. So what could that look like? And by the way, this is a very small list of the things that can potentially go wrong. We know that models drift, and, historically, models have drifted because the world changes. Right? So you design your model, you build your model, you deploy the model based on data you have up until that point, which is a representation of what's in the world up until that point. But the world changes, and therefore the model tends to drift from what it's intended to do. There's a second massive area that is contributing to drift, and that is that models are now beginning to self-evolve. Right? So we're all hearing about recursive self-improvement. We're all hearing about perpetual self-improvement. This is where models are able to make changes to themselves, including down to the source code level. 
So, a, you've got the world changing; b, you've got the models potentially changing, or the systems that are built on multiple models changing. So drift is becoming a major issue that, again, you need to monitor for in real time. Secondly, there are specific anomalies. Right? So you look at each input-output pair, and, by the way, I could have said earlier that those inputs and outputs can be anonymized. They can be obfuscated in various ways. As long as there's consistency, we're able to establish a baseline. But what a smoke detector like Raids should do is determine whether a specific output is anomalous. So without getting technical, if you plot your range of expected inputs and outputs on a Gaussian distribution, if they're close to center, that's great, that's expected. But if you have anything on either extreme, that's an anomalous output, and a black box monitoring system like Raids should flag that and say, look, something's gone wrong. This particular loan approval or this particular fraud flagging is anomalous and requires investigation. It provides explainability. It provides an audit log, timestamped, so that Prescient can come in and look at that. But also, whoever's responsible for these systems can go in and actually work with them. And then, for various reasons, behavior will deviate because of all the various changes that are happening, and your output will degrade. If your output degrades, the models are becoming less and less effective. Right. So every detection creates an audit trail. So going from left to right: Raids is operating, and it detects either a drift or a specific anomalous input-output pair. It flags that and documents it, so the evidence is recorded immutably; it's timestamped, it's logged. 
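The smoke-detector idea Nick describes can be sketched in a few lines of Python. This is a toy illustration, not Raids' actual method: it sees only a scalar feature derived from each input-output pair (here, imagine output length or an embedding distance), fits a baseline mean and standard deviation, and flags values far out in either tail. The three-sigma threshold and the feature choice are assumptions made for the example.

```python
import statistics

def fit_baseline(samples):
    """Estimate normal behavior from a window of scalar features
    observed per input-output pair (e.g. output length)."""
    return statistics.fmean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values in either extreme tail of the baseline distribution,
    mirroring the 'far from center of the Gaussian' intuition."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Baseline window: features from known-good input-output pairs.
baseline = [102, 98, 110, 95, 105, 99, 101, 97, 103, 100]
mean, stdev = fit_baseline(baseline)

print(is_anomalous(104, mean, stdev))  # near center: expected behavior
print(is_anomalous(400, mean, stdev))  # extreme tail: flag for investigation
```

A production system would use richer features and distribution tests rather than a single z-score, but the flow is the same: fit a baseline from observed pairs, then score every new pair against it.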
We have numerous explainability components that will be able to give the auditors all the information they need to understand that, but more importantly, to also give the team that's responsible for managing the AI everything they need to correct and fix the issue. And then, of course, there's the control mapping to ISO 42001, and all of that together makes you audit-ready. So when you do stage two audits and continual audits beyond that, everything is there to ensure, a, the model's behavior is being corrected, and, b, it's all being documented for the audit trail. And as Sammy mentioned, there are over a hundred regs, and likely more are gonna be popping up. And if you're operating globally, trying to navigate that minefield is just impossible. Right? And AI is expanding and growing and changing at breakneck speed. So to do that is impossible. So the way I view ISO 42001 is essentially as a Rosetta Stone. If you comply with ISO 42001, you are largely compliant with these various regs that are popping up everywhere. So in summary: you build your AI; you work with Drata to document and go through the AIMS process; Raids monitors in real time, twenty-four seven, flags any drifts, flags any anomalies, and allows you to flag false negatives and false positives so you get better results; and everything that Prescient needs to continue the audits and give you the certification is generated so they can do their job, so that ultimately you are building trust as opposed to building risk, which is the ultimate goal of the exercise. Thank you. Oh, and a small typo there in my email address. I don't know how that slipped through, but thanks very much, everyone. Thank you, Nick. Well, stay there, Nick, because we're gonna open up to Q&A. But, yeah, we'll update that typo, and we're gonna send out a copy of these slides. And as Nick said earlier, we're working on a white paper. 
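The "recorded immutably, timestamped, logged" property that every detection needs can be sketched as a hash-chained, append-only log: each entry's hash commits to the previous entry, so any silent edit to history breaks verification. This is a generic tamper-evidence pattern offered as an illustration, not a description of how Raids or Drata actually store evidence.

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # sentinel hash for the first entry

def append_event(log, event: dict) -> dict:
    """Append a detection event; the entry's hash covers the previous
    entry's hash, chaining the whole history together."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify_chain(log) -> bool:
    """Recompute every hash; any tampered entry breaks verification."""
    prev = GENESIS
    for entry in log:
        record = dict(entry)
        stored_hash = record.pop("hash")
        if record["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != stored_hash:
            return False
        prev = stored_hash
    return True

log = []
append_event(log, {"type": "drift", "metric": "output_length", "z": 4.2})
append_event(log, {"type": "anomaly", "pair_id": "req-123"})
assert verify_chain(log)       # intact chain verifies
log[0]["event"]["z"] = 0.1     # tamper with history...
assert not verify_chain(log)   # ...and verification fails
```

Real systems would add signatures and write-once storage on top, but this is the core of what makes evidence "tamper-proof with timestamps" in the sense the auditors described.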
We're just waiting for our marketing team to approve it, and then we're gonna send that out as well. So I know we've covered a lot of ground in a short space. We've got plenty of time for questions. I think I'll start with some of the questions you've posted in the Q&A channel already, just making sure we've not answered those. So the first one is for you, Nick. Charles has asked: would you position Raids.ai as the operational monitoring core of an AIMS for ISO 42001, or a specialized observability tool that plugs into the broader AIMS stack? Yeah. I think it's the former, but somewhat the latter too. It achieves two objectives. One is everything you need to stay compliant with the regs, but, b, it's a massive risk reduction tool. So, you know, as Sammy mentioned earlier, in regulated industries or high-stakes scenarios, you want to show the world that you are taking AI safety seriously. So it's broadly an AI safety monitoring tool that can plug in pretty much anywhere within the ecosystem. Right. Excellent. Thank you. So let me look at some of these others. I will get to them all. There's quite a few. So there's one here: if a company's got limited resources, where should it start first, ISO 42001 or the EU AI Act? And I'll give you my context there, then, Sammy, you're probably well placed on this one. So just to be transparent, right now Drata has got a number of out-of-the-box frameworks, which includes ISO 42001. We do not have the EU AI Act right now. That's because we're still waiting for that to be formulated and confirmed in the different EU regions, but we will be bringing that out very shortly. So what we've been doing is advising customers who want to move along the path on AI regulation to use ISO 42001 as a baseline, so that you've got a lot of the best practices, which will help you because the EU AI Act is an actual external regulation that you need to get audited against. 
But, Sammy, I'd be interested to get your view: if a company's got limited resources and time, which I know a lot of customers have, where do they start on their AI regulation journey? Yeah. Absolutely. Start with the ISO 42001 certification journey. Before you can get audited and certified by us, you have to go through a readiness journey, which takes time. Right? It's not a matter of a few weeks; sometimes it's months. But that journey can be accelerated dramatically by following the framework that you have on Drata. It has automation that can collect a lot of those, you know, technical requirements that we have within the standard, and then, you know, the required policies and procedures that you would need to comply with the standard as well. If you have implemented ISO 42001 leveraging Drata as a process and a tool, then, you know, you would be ready for an audit, and then you go through the audit journey. We definitely have some accelerated programs. One of them is called the quality compliance acceleration program with Drata that you can take advantage of. It is very affordable. So please reach out to our team, you know, Hala and the others, so that we can get you introduced to those programs. So, yeah. I mean, the majority of our customers are startups or early startups or tech companies. They are price sensitive. We understand that. And so, you know, globally, about 5,000 customers go through this journey with us across multiple frameworks. So we're happy to help, and we are tailored to support you on this journey. For the EU AI Act, you have to wait until those, you know, rules get finalized and they are really in scope for you, but you need to prepare by implementing ISO 42001 for sure. A couple of things. Sorry, Andy. A couple of things. With the EU AI Act, I think there's talk of the enforcement date being pushed back to 2027. 
So I know it's been tabled, but not actually moved; it's still sitting at August 2026, but it's more than likely gonna be pushed back. So there's a little bit of time there. And I just wanted to add that, you know, real-time monitoring, safety monitoring of AI, is very inexpensive. So it's a common denominator that one should start with. From the moment you release your model out into the wild, or the moment you start using a vendor's model, you know, start monitoring it to ensure that you're sort of protecting yourselves. And there's a great follow-up question. And, Sammy, we get asked this a lot. I'm sure you do: how long does it take me to get ISO 42001 certification? I mean, it depends on the size of the org, the maturity of what you're dealing with, and what I call your GRC baseline. But, like, guidance, Sammy, on that one? Not to throw you a difficult question, but... Sure. No. Absolutely. I think, obviously, it depends on, you know, how many applications you have that are using AI, you know, how big your organization is, the user base, and then, you know, the complexity of those applications if you're a developer. If you're a service organization, it depends on, you know, the locations that you're operating in and the types of data that you're storing or processing. So there are a number of factors that go into scoping. But I would say that, on average, for smaller companies, we see that it takes about four to six weeks to get through the audit process, which includes, you know, stage one and stage two audits. And then prior to that, you also have to go through an internal audit, which can be done by one of our partners on the Drata network. As the external certification body, we cannot perform those audits. So there are two audits, essentially. One is an external audit, the certification audit, which is done by us. 
And the second one is an internal audit, which you perform with another partner, or internally if you have independent, you know, audit resources. So step one, getting ready on Drata. Step two, internal audit. Step three, external certification audit. The whole process might take, start to finish, you know, ninety days at most if you are not a super large, you know, complex entity. And we've got a few more questions coming up, which is great. So I'll go on to the next one. A lot of them sort of interconnect. So I think this is definitely one that comes up a lot from my experience. So there's a lot of AI tools out there. There's a lot of third-party tools. There's a lot of organizations, like Drata, that are building their own AI capabilities. But if organizations are using third-party AI tools, they can't see the model. So they're saying: how do we monitor something we don't control? So, Nick, I think this is a great one for you, because I think that's why Raids.ai came about. And I think you've got a better answer to that than what Drata does. Yeah. Absolutely. I mean, that's what the whole black box API access discussion was about. So if you're using a third-party model in building a product, or just using it directly through a web interface or whatever, as long as you have access to the inputs, which are coming from you, and the outputs, which are coming from the model, we're able to build a baseline and look for deviations from baseline, look for anomalies, look for drifts, and so on. So it's designed to monitor models, because 76% of the AI that's used in enterprises or companies across the board is third-party AI, not, you know, self-built AI. And there's a great follow-up question which I'll give you my answer to, and then, Nick, over to you again. But would Raids.ai or Drata consider agentic AI discovery as a product offering? So, basically, to discover shadow AI. 
So first of all, Drata wouldn't do that. What Drata is there to do is, we do connect to systems, but we're doing that to gather evidence against your compliance posture. We're not looking at patterns of behavior and discovery against things that we're not specifically targeting for those audit controls. So this is exactly why we're partnering with Raids.ai, to look at that shadow infrastructure and those anomalies, as I'd call it. But, Nick, that's exactly what Asmund asked. It's like, this is exactly why you need Raids.ai. Well, it's a great question. I mean, I think with the recent OpenClaw phenomenon and so on, there are two ways I interpret that question. One is, you know, agents using the AI for whatever reason, for nefarious purposes. Because Raids will look at inputs and outputs as a pair, and then also just look at inputs and just outputs, so it's three levels, that will definitely cause a deviation and will be very different from the baseline of human behavior. That's the one part of the question. Right? I don't know if the question was aiming at the second part, which is talking specifically about agentic AI. And, you know, if that is part of the question, then the answer is absolutely. It doesn't make a difference what type of AI system it is. We built a foundational model that's very quickly able to determine a baseline. So if you're used to using agent A, we'll be able to determine the baseline and what is normal behavior for agent A. And the moment, you know, you get a deviating type of behavior, we'll be able to flag that and pick that up as an anomaly or drift or some kind of intrusion. Great. I know, Gianati, you've asked if you could get a demo of Drata for ISO 42001. So we'll do that on a separate call. We'll reach out to you separately after this and talk you through that. 
And we could probably turn that on in your tenant so you could see how that works with your current controls and frameworks. Right. I'm just looking through to see if there's anything else; there's some chat about financial fines, which we've answered. So, yeah, €35,000,000 or 7% of global turnover is the penalty, which is coming with the EU AI Act. I'll add to that. I saw that question from Alan and I answered it, but, you know, what's an even bigger cost is when your AI gets it wrong and you're sued, as many companies have been; the damages there could be substantially higher, and we've had cases where it's been in the billions. And currently, I think on average there are five major AI safety incidents a day. So it's growing. It's a big number. The risks are high. A couple more questions, and maybe for you, Sammy. So Charles is asking: ISO 42001, does it apply to large enterprises such as investment banks? Why would an investment bank consider implementing this standard? Yeah. Absolutely. So an investment bank may not need to go through a certification journey, but they would absolutely need to look at a standard framework like ISO 42001 to organize their efforts and their internal control structures for AI-related activities, whether they're building their own or they're, you know, bringing in a third party to run their operation. Right? So that's actually happening a lot. The way we operate in those environments is that we would recommend that they implement something like Drata to continuously monitor, get ready, and implement the framework. And then we'll come in, and we will be the human in the loop for continuous validation; you know, it could be monthly, or whatever plans they wanna subscribe to. So that way they have validation of all of those controls that are operating in the environment. So that's kind of what we do. 
We don't offer them a certification because they're not selling to, you know, other enterprises as such. I hope that answered the question. Can I add to that, Andy? Generally, as I said earlier, showing that you take AI safety and compliance seriously builds trust. So no matter what business you are in, including investment banks, you know, you do have stakeholders, you do have partners, you do have clients, you have insurers, you have many, many stakeholders. And if you are consistently demonstrating that your AI is safe, and that for any AI you use you're taking AI safety seriously, that builds trust. And trust leads to very obvious positive consequences. So it's not just all about, you know, do I need to comply? Do you want your AI to be perceived as safe, and do you want it to be safe? You know, that is the bigger question. Yeah. And there's a good question here which sort of ties into that. I'll show you, from my experience, how this has changed over at least the last ten years. The question: have you seen an uptake in companies adopting ISO 42001? They're hearing that procurement departments and enterprises are looking at this more. So, absolutely. Back in the day, if you were supplying your software to a fintech service, typically they would check that you had a code of conduct and that security-wise you were good. Then all these regulations came in and pushed that down to the suppliers, to say, if you wanna do business with us, you've gotta adhere to these standards. And so this is where we see the regulation being driven, particularly if you wanna get into a supply chain and you're building software. But let's have a look, see if there are other questions. There's a really good question; I've gotta try and find it now. It was along the lines of: how technical do you need to be to navigate ISO 42001? 
So that'd be a good question for you, Sammy. I can take a stab at this one. I don't want to give you the auditor answer, "it depends," but it kind of is the answer. If you work for an organization that's developing models, and we have audited those that you probably know in Europe, these are frontier models, foundational models. We audited them for SOC, ISO, and a whole bunch of frameworks. Obviously, I couldn't take on that engagement without knowing enough about how model development works. Right? So if you are an auditor, or an internal compliance manager or GRC manager on a project, or work for a company that is developing frontier models, obviously you have to know a lot more about machine learning and artificial intelligence. So reach out to Nick for resources; he's an expert in that field. But depending on the actual application of AI across your organization, every organization is jumping on this anyway. Right? I mean, YC is not funding anybody that doesn't have AI in their sales pitch or pitch deck, if you will. Adoption is so crazy that I think it would be bad advice not to tell you that you have to learn the ABCs of AI. But do you need expert-level knowledge to perform an ISO 42001 internal audit? Not necessarily. You need, I would say, curiosity to learn. But if you're implementing controls, then you should probably know what you're doing. So that's what I would recommend. Start with generative AI and build something yourself with Claude Code, start with the basics, vibe code your way, and you would see why you need to really tame this beast, as I call it. Right?
You know, if you're implementing a voice agent, which we did internally, and we implemented video analysis for evidence, we had to consult with our chief legal officer to understand the various legal implications of the implementation. We had to draw an architecture diagram and show them. So you should be able to read data flow diagrams and architecture diagrams at the very least. Great. This is a quick one from Amit. Amit's asking: how can we learn end to end about Drata? So if you're a Drata customer, reach out to myself and Hallo; our details are there. We'll put you in touch with your CSM or account manager. If you're not an existing Drata customer, we will send you a link, because we run regular weekly calls with our solution engineers to go through that end-to-end experience. Then we got a fairly detailed question from Anthony from Africa, Kenya to be precise, who is working in the healthcare sector. Two questions in reference to the webinar. So thanks for the question, Anthony; we appreciate you attending. In African healthcare setups, patient data is often stored in hybrid systems, a combination of manual and digital. How does ISO 42001 address compliance risk in such transitional environments? And then a follow-up question: what mechanisms within ISO 42001 can help mitigate risks of medical device interoperability failures in resource-limited hospitals? That's a great question, where you've got what I'm going to call a legacy environment moving into a digital world, but you're trying to traverse the two. There's a lot there to unpack, Sammy. It's a great question, Anthony, because you're making Sammy think. Yeah. No, I would say that if you are going through this journey, you have to obviously get up to speed with what's happening with various regulations, and data localization comes up all the time. It's an important consideration for building any solution.
So, you know, I'm not an expert on data localization. Let me toss it over to Nick and see if he has an opinion on this. Yeah. Thanks, Sammy. Great question. Right, so let's unpack it. Healthcare sector, data stored in different systems. Right? So you've got EHR systems, EMR systems, and so on. In terms of data storage, that's sort of one component. I think it touches the privacy aspect of the four pillars. Right? It's how that data is processed, and I guess that touches on the interoperability and where AI is used to process it and so on, that really makes an impact. And what are the four pillars again, Sammy? It's fairness, privacy, accountability, transparency; reliability, safety, etcetera, etcetera. I think if you are complying with those as far as AI is concerned, because we are talking about monitoring AI systems here, not necessarily just the storage of data, ISO 42001 can mitigate the risks. So if you implement and adhere to all of the controls, the 38-odd controls of ISO 42001, including clause nine, which covers monitoring, that monitoring will include ensuring that the models do not behave in a way that they shouldn't. Right? So they shouldn't leak personal data that should be kept private. And I think there are other compliance standards that come into play; HIPAA and others come to mind as well. This is not purely an ISO 42001 domain question. It's a great question, but there are other areas beyond ISO 42001 that this touches on, from privacy standards onwards. But on how you implement the AI aspects: if you follow ISO 42001, I think you're 90% of the way there. The other compliance aspects, I think, will get you the rest of the way. So I hope that answers the question, Anthony.
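Nick's point about monitoring catching models that leak personal data can be pictured as a simple output check. This is a hypothetical sketch, not part of the standard: the pattern set, the function name, and the idea of regex-based detection are illustrative assumptions, and a real deployment would use far more robust PII detection.

```python
import re

# Hypothetical sketch of one monitoring check in the spirit of clause 9:
# scan a model's output for obvious personal-data patterns before it is
# released. Pattern names and regexes are illustrative, not exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def flag_pii(model_output: str) -> list[str]:
    """Return the names of PII pattern types found in a model response."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(model_output)]

print(flag_pii("Contact the patient at jane@example.org"))  # ['email']
```

A check like this would sit alongside, not replace, the organizational controls the standard asks for.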
If you'd like to unpack it a bit more, just reach out to us, and we'd be happy to provide a little bit more color for you. Anthony has got a follow-on question, but I can answer this one. He's asked: what practical tools or dashboards can hospitals use to track ongoing compliance with ISO 42001 in real time? So think about how you would do that, Anthony, without a GRC automation tool like Drata. When you look at the requirements, which are the pillars that Nick and Sammy talked about, in order to demonstrate compliance you need to provide evidence. What Drata is doing is, wherever it can connect to those systems, we've deconstructed those requirements into what we call Drata controls, the baseline constituents. The idea is we can connect to a system, gather that evidence in a JSON file against the control, against the requirement, and we're automating that every 24 hours. There are a lot of things linked to policy and process which you can augment into Drata. And then you've got manual things. So where you've got things which are legacy, either on-premise or paper-based, you'd still need a way of giving that to a control owner and getting that evidence in. So we can automate that, or, I'd say, augment it. That's why Drata works so effectively: it's not just for ISO 42001, it could be HIPAA, and we cross-mapped all those Drata controls. So you do it once, and it's got a multiplier effect. We can maybe bring that to life a little bit more, Anthony, in a separate demo. Right. We've got ten minutes, but we've still got questions coming in. So, next one. I love all these questions, by the way. Lemuel asked: for an SME like a recruiter or consultancy agency that uses third-party AI tools to analyze candidate data, how relevant is compliance with the EU AI Act?
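The cross-mapping idea described above, where one control evidenced once has a multiplier effect across frameworks, can be sketched as a small data structure. Everything here is an illustrative assumption: the control ID, the requirement references, and the function are invented for the example and are not Drata's actual schema.

```python
# Hypothetical sketch of framework cross-mapping: one internal control,
# tested once, maps to requirements in several frameworks at once.
# IDs and clause references below are illustrative placeholders.
CONTROL_MAP = {
    "access-review": {
        "ISO 42001": ["A.9.2"],
        "ISO 27001": ["A.5.18"],
        "HIPAA": ["164.308(a)(4)"],
    },
}

def frameworks_satisfied(control_id: str) -> list[str]:
    """Frameworks that receive evidence when this one control is tested."""
    return sorted(CONTROL_MAP.get(control_id, {}))

print(frameworks_satisfied("access-review"))  # ['HIPAA', 'ISO 27001', 'ISO 42001']
```

The design point is simply that evidence is collected against the control, and the control, not the evidence, carries the mapping to each framework's requirements.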
Would companies like this typically need certifications such as ISO 42001? So, recruitment and consultancy agencies, and how relevant that is. I would say so. Yeah. Because if you're storing customer data and processing customer data and using AI to analyze any part of that, you would have to answer some of these questions for those enterprise customers, especially if you are in a managed service environment. So let's say you are an outsourced recruiting operation, an RPO, recruiting process outsourcing. You have a contract where you go on-site and hire folks for your clients, and you store data related to candidates. There are specific regulations that would impact you there, including the EU AI Act. So I absolutely recommend ISO 42001 as a way to mitigate risks for your use case. Sanjay has a good question. He's a business and human rights expert with a supply chain management, sustainability, and compliance focus. He wants to learn more about the implementation and compliance side of ISO 42001 without any prior technical knowledge or skill. He's asking: is there any documentation available in the public domain? So, from those requirement levels, Sammy, is that available somewhere that you can go and access? Yeah. You have to buy the standard. If you want to look at the ISO 42001 standard itself, we can't just email it to you; there are licensing requirements that we need to respect. But you can look it up. I'm pretty sure a lot of companies and thought leaders out there have written blogs and articles about it. If you go to Drata's website or Prescient Security's website, you can find a ton of resources. Reach out to us. Happy to take questions one-on-one in an interactive session so we can help you through those.
But, yeah, if you want to go through the certification journey, one of the questions we ask, and one of the requirements, is that you purchase the standard. You maintain the versions of that standard as it changes over time; every five years, an ISO standard goes through a revision. Right now, it's the 2023 version, and so on and so forth. But you can get a ton of resources by reaching out to us, and we'll probably follow up as well, Andy. Right? So beyond the certification, Sammy, we've also put together an interactive page on our site. And I think you, Frank, can put in the link to it, which really just shows you how everything maps together. Right? So you've got ISO 42001; you've got things like prEN 18286, the EU AI Act, what's happening in other parts of the world. And we've made it pretty easy to follow, with all kinds of stats and information and a glossary, breaking it down. As a starting point, it's really an aggregation of all of our research up until about a couple of months ago. Things have moved a bit since then, so we will update it. But we're happy to provide a link to that as a starting point. If you want to go through the certification process and take it further, yeah, Sammy is spot on: you've got to go through that process. Great. We've still got some more questions, and we've got six minutes. So the next question is from John. What are the implications for the automotive industry, especially for software-based systems development? Have you already implemented ISO 42001 for any organizations in this industry? Are there any best practices specific to this industry? So, John, first of all, we have got a lot of organizations supplying to the automotive industry, or manufacturers in it.
Typically, from my experience, if it's a European organization, they'll baseline with ISO 27001. And we've seen a lot of those organizations now, globally, just in automotive, looking at ISO 42001 as a baseline for AI. In addition, TISAX is now an out-of-the-box framework, again complementary in the industry you are referring to, that we've now got available. And the whole idea is that they're all interlinking, so you do it once in Drata and it's got a multiplier effect. Yeah. Over to you, Nick, I don't know if you've got anything else to add to that. Yeah, I'd say so. I mean, ISO 27001 covers you if you're not using AI, right, to a large extent. So if AI is being used in any way in the automotive industry, absolutely, going through the ISO 42001 process is crucial. We don't have any clients necessarily using it, but there are many documented cases, the Waymo one comes to mind, where these kinds of processes were not put in place and it ended in disaster and major, major costs. So I think that's the rule of thumb. Are you deploying any kind of software? Whether it be designing vehicle safety systems, whether it be incorporating any kind of self-driving capabilities within the vehicle, any aspect relating to the automotive industry: if it's touching AI in any way, this is an absolute must. Right. We work with dealers in the automotive industry, and they have specific security requirements coming from, you know, Benz or BMW and others. And so they start off with ISO 27001 and are slowly now getting into some of the AI-specific questions and queries as well. If you don't have ISO 27001, you should be calling us immediately, because those are becoming harder requirements.
But, yeah, as Andy mentioned earlier, the enterprises are now catching up, asking more and more questions through their TPRM auditors to make sure they know, first of all, whether you're using AI internally or through third-party partners like Raids, and what you are doing to minimize the risks to the customer data that's been entrusted to you. So it just makes basic business sense. As a business owner, I can tell you that we spend a lot of time internally even thinking about shadow AI, implementing controls around it, doing trainings, and all of those things. So ISO 42001 is that playbook that gives you all of those checkboxes you need, all in one place. It's easy to just start with that and defend your organization against any future risks or regulations or litigation. And then we got... oh, sorry, Nick, were you going to...? I was going to say, incidentally, there's an interesting statistic. I don't recall the source, but I definitely verified it: the average cost when something goes wrong with shadow AI is $670,000. So when you've got employees using AI without your sanction, and they're pushing proprietary code or documentation and so on through it, when something does go wrong, it can get very, very expensive. And that's something you're unable to budget for if you're not aware of it. So monitoring shadow AI is absolutely crucial. We haven't gone into much depth on it on this call, but it's critical. Great. Well, we've got one question, but we don't have much time to answer it. So I'll ask the question and answer it: how do you approach the challenge of combining NIST AI RMF and ISO 42001? They are very distinct, but there is crossover, and that's where Drata can help you. So I want to thank everybody, certainly Nick and Sammy, for partnering and providing the content and expertise. Thank you so much to everybody who's joined.
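One hedged way to picture the shadow-AI monitoring discussed above: scan egress or proxy logs for AI service domains that are not on a sanctioned allow-list. The domains, the log format, and the function below are illustrative assumptions for the sketch, not a description of any specific product's behavior.

```python
# Hypothetical sketch: flag "shadow AI" usage by checking proxy/egress log
# lines against an allow-list of sanctioned AI services. Domains and log
# format are illustrative assumptions.
SANCTIONED = {"api.openai.com"}
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def shadow_ai_hits(log_lines: list[str]) -> list[str]:
    """Return log lines that touch an AI service not on the allow-list."""
    unsanctioned = AI_DOMAINS - SANCTIONED
    return [line for line in log_lines if any(d in line for d in unsanctioned)]

logs = [
    "10.0.0.5 GET api.openai.com/v1/chat",
    "10.0.0.7 POST api.anthropic.com/v1/messages",
]
print(shadow_ai_hits(logs))  # ['10.0.0.7 POST api.anthropic.com/v1/messages']
```

In practice this kind of detection is only one input; the controls, training, and approval workflows mentioned in the discussion are what actually reduce the risk.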
We will be sending out all the collateral. You've got the details to get in touch with us. And I want to wish you a very good afternoon, and thanks for your time. Thank you all. Thanks, Andy. Thanks, everyone. Thank you. Cheers.