Recent advances in artificial intelligence have led more and more companies to turn their attention to adopting the technology. At the same time, questions about its use and regulation are gaining relevance.
(See: The company that has become a ‘superpower’ thanks to AI).
Brad Smith, President of Microsoft, stated that the company today has almost 350 people working on AI, which allows it to implement best practices for building AI systems that are “safe, secure and transparent” and designed to benefit society.
Drawing on that perspective and experience, Microsoft released a new report titled ‘Governing AI: A Blueprint for the Future’, in which it highlights five keys to improving citizens’ lives while governments develop new controls. The first pillar focuses on a security framework and its implementation.
Here the technology company stresses the importance of consolidating an AI risk management framework that allows governments to promote trust and responsibility. Along these lines, Microsoft affirms that joint work with industry leaders will be essential to “recognize the pace of AI advances”.
(See: The West seeks common standards for Artificial Intelligence).
The second key focuses on “requiring effective safety brakes” for AI systems that control critical infrastructure such as electricity grids, water systems and traffic flows.
“The new laws would require operators of these systems to build safety brakes into high-risk AI systems by design. The government would then ensure that operators test high-risk systems on a regular basis, to ensure system safety measures are effective,” the report details.
The third approach suggests developing “a broad legal and regulatory framework based on the technology architecture for AI”.
“In summary, the law should assign various regulatory responsibilities to different actors based on their role in managing different aspects of AI technology. This should first apply existing legal protections at the application layer to the use of AI,” the report highlights.
(See: ‘Artificial intelligence could lead humanity to its extinction’).
The fourth key seeks to promote transparency and ensure academic and non-profit access to artificial intelligence.
“While there are some major tensions between transparency and the need for security, there are plenty of opportunities to make AI systems more transparent in a responsible way. That’s why Microsoft is committing to an annual AI transparency report and other steps to expand the transparency of our AI services,” the report states.
Finally, the fifth key calls for pursuing public-private partnerships in the use of AI.
(See: This is how Microsoft’s Bing search engine works with ChatGPT).
PORTAFOLIO