AI ethics, regulation and business growth

Authors

Technology Practice Team

Published
8 May 2024
Read
8 minutes
The technology sector is increasingly focused on artificial intelligence (AI) and the ethics of its use. AI has become one of the most transformative developments of our time, reshaping how we do business, live and work. Machines now learn with little regular human input, accelerating efficiency and effectiveness while cutting costs. Governments, however, remain wary: AI offers tremendous benefits when used correctly, but poses significant risks when misused.

Kestria recently met with global industry leaders at the AI ethics, regulation and business growth roundtable to discuss where to draw development lines, whether to enforce global or regional rules, and whether self-regulation and ethical frameworks can preempt legislative action.

Key takeaways: 

Ethical concerns and regulation: The discussion stresses the need to address ethical concerns surrounding AI's growth, including job displacement and biased decision-making, through regulation.

Systemic risks and fairness: Industry leaders highlight systemic risks in AI applications, particularly in financial markets, emphasizing the importance of fairness, transparency, and human oversight.

Transparency and accountability: Understanding how AI systems actually work is vital, as is balancing job-loss concerns against the shift to higher-value tasks, with action urged at both the global and regional level.

Balancing innovation and ethics: There's a call for a balanced approach integrating ethical AI design from the start, with collaboration among stakeholders seen as crucial for fostering trust and advancing AI responsibly.

Addressing the concerns behind AI's rapid growth

For Ego Obi from the UK, Head of Operations, Sub-Saharan Africa at Google, concerns about AI's rapid growth are valid. AI's ability to mimic human thinking and learning raises worries about job displacement and unemployment in certain industries. While AI may enhance business efficiency, it may also displace certain jobs; on the other hand, it can create upskilling opportunities. Another factor is the ethical concern surrounding the lack of transparency in AI-assisted decision-making, which may produce unintended negative outcomes such as bias. This is the so-called black box problem: because of the algorithms used, it is hard to understand how AI systems make decisions or reach certain outcomes. If the data fed in is biased, the AI system can perpetuate those biases, as the sketch below illustrates.
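
A deliberately naive sketch of that last point: if historical decisions were skewed against one group, a system that learns from them reproduces the skew. The groups, numbers and toy "model" below are purely illustrative assumptions, not real data.

```python
from collections import defaultdict

# Hypothetical historical hiring decisions: (group, hired) pairs.
# Group "A" was historically favored over group "B".
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

# A naive "model" that simply learns the historical approval rate per group.
outcomes = defaultdict(list)
for group, hired in history:
    outcomes[group].append(hired)
learned = {g: sum(v) / len(v) for g, v in outcomes.items()}

print(learned)  # {'A': 0.8, 'B': 0.4} -- the historical skew becomes the prediction
```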

AI applications gather and analyze extensive data, raising privacy concerns: individuals fear misuse of personal information and privacy breaches, and regulation may be needed to mitigate these risks. Then there is the fear of losing control, particularly in crucial sectors like healthcare and finance, where autonomous decisions could have unintended consequences without human oversight. Despite its potential for enhancing business efficiency, AI raises significant societal and ethical concerns that demand attention.

According to Razvan Cazan from Romania, Head of Finance & Accounting Service Delivery at DB Schenker DBS, the major concern is systemic risk. Beyond individual users and algorithms, systemic interactions between AI applications and humans pose significant challenges. Financial markets, for example, have been transformed by AI algorithms that independently discover investment strategies. This can also cause harm, such as AI-driven trading algorithms contributing to market volatility or systemic instability. Addressing these risks requires an ethical framework that considers the broader impact of AI on financial systems.
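
To make that amplification mechanism concrete, here is a toy feedback-loop sketch, a deliberately simplified illustration under assumed parameters rather than a market model: two identical trend-following agents each buy into rising prices, so a small one-off shock grows instead of fading.

```python
price = 100.0       # hypothetical asset price
last_price = 100.0
momentum = 0.0

for step in range(10):
    shock = 1.0 if step == 0 else 0.0    # a single small external shock
    momentum = 0.9 * momentum + (price - last_price)
    demand = 2 * 0.6 * momentum          # two agents running the same strategy
    last_price = price
    price += shock + demand
    print(f"step {step}: price {price:.2f}")  # the 1-point shock keeps compounding
```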

Secondly, there is the question of fairness and bias in AI decision-making. Historical data biases can skew lending decisions, credit scores and investment recommendations. Modern AI algorithms learn autonomously, which poses challenges for transparency and accountability, and ethical dilemmas emerge as AI replaces human judgment, necessitating responsible decision-making. Ensuring fairness and reducing bias is crucial to prevent discriminatory outcomes, which puts the emphasis on transparency, explainability and human oversight.
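
One minimal form of such oversight is a fairness audit on a model's decision log. The sketch below computes a demographic parity gap on hypothetical lending decisions; the groups, numbers and review threshold are assumptions chosen for illustration, not a recommended standard.

```python
# (group, loan_approved) pairs, e.g. collected from a model's decision log.
decisions = [("A", True)] * 70 + [("A", False)] * 30 \
          + [("B", True)] * 50 + [("B", False)] * 50

def approval_rate(log, group):
    results = [approved for g, approved in log if g == group]
    return sum(results) / len(results)

gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.20

# A team might pick a threshold (0.1 here, purely illustrative)
# above which decisions are escalated for human review.
if abs(gap) > 0.1:
    print("Flagged for human review")
```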

Last but not least, there is privacy and security risk. As AI processes sensitive financial data, striking a balance between innovation and data privacy is critical, and robust security measures are essential to protect against cyber threats and breaches.

‘With all these challenges in mind, these are just a glimpse of what is currently happening in finance, the tip of the iceberg. I think the basic solution here would be ethical AI by design, a relatively new topic still in its inception stage. Ethics by design is the right approach to systematically and comprehensively incorporate ethical factors into the development process of AI systems,’ says Razvan Cazan.

Miki Velemirovich from Canada, Head of Product at Alexa Translations, highlights transparency: not understanding how AI systems work is a genuine concern. People worry that AI doing tasks better might lead to job loss, but automation also frees people to focus on higher-value tasks. While the fears are real, transparency and ethics must be addressed, and explored at both the global and regional level.

From an enterprise perspective, AI can reduce non-value-added work, allowing a greater focus on validating ideas, understanding customer needs and studying consumer behavior more effectively. GenAI can help achieve this faster than before. Balancing risks and focusing on positive outcomes is crucial: the true value lies in AI plus humans, as AI alone lacks the creativity and strategic depth of experienced people. Certain roles may face automation, but AI can enhance what we do, pushing our competitive advantage forward. A balanced approach is key.

According to Costas Gritsis from Greece, Founder and CEO at Tensorgrow.ai, people investing in this area don't think about ethics or dangers. ‘Everyone wants to complete successful projects as quickly as possible, aiming for a huge NPV, managing the fear of missing out and not considering ethics, bias or risk management. Businesses feel the risks others have mentioned, not in terms of human fate, but “Am I going to protect my business? Will automation push me out of the market?” So they accelerate efforts, trying to adopt models quickly. This highlights the need for industry alignment, creating a streamlined, formal way to manage the risks of this speed.’

Ensuring AI safety: a question of ethics and regulation

As Ego Obi points out, human-centered technology is key: technology is created by humans and is not inherently good or bad. ‘We need to ensure AI systems don't replace human judgment, and address ethical considerations and societal impacts early on. This requires identifying biases in datasets and promoting a culture of ethical responsibility throughout the AI development lifecycle. Organizations should embed AI experts and ethicists within product teams to ensure ethical solutions. Balancing innovation with societal concerns is essential for ethical oversight.’

To regulate AI effectively, both those building the technology and policymakers need training. Capacity-building is essential for understanding what we are regulating. This requires balancing regional and global nuances, adapting a global framework to each business and region.

Miki Velemirovich believes that customers will start asking more questions of AI providers, and they'll need to be more transparent and clear about what the technology actually does and how it protects them. ‘We serve many global organizations, including professional services, banks, and law firms, and they ask questions such as, "Where's my data going?" "Are you using it to train your models?" "Are you using third-party components?" and "How ethical are your practices?" They also ask, "Can you show me how this thing works?" To stay competitive and provide value, we need to answer serious questions; otherwise, customers may go elsewhere. It's critical for companies to realize this early on; otherwise, they'll find themselves without answers when customers seek solutions elsewhere with providers who are more transparent, clear and collaborative, and who meet ethical standards. This is an important conversation, as ultimately, we all need to be able to show how these tools provide value and protect the customer and their data,’ adds Miki Velemirovich.

Costas Gritsis thinks there are various approaches to consider, such as algorithmic transparency or consistent human oversight. This is a live concern in Europe, where GDPR, as a data privacy regulation, already sets a strong benchmark for whether an AI system is fair and respects human rights. However, there is an ongoing debate about GDPR compliance delaying European innovation and handing an advantage to the APAC region or the United States. This is challenging, as boardroom discussions about innovation can easily drift away from GDPR.

‘GDPR is unfortunately seen in some circles as an innovation burden for Europe. Scaling this to the AI scene and trying to establish a solid regulatory framework raises voices that say, "Let's leave Europe to regulate while we focus on innovation and explore new economies." This is a multi-faceted problem, and addressing it effectively requires turning to education and raising people's understanding,’ states Costas Gritsis.

AI accountability

According to Miki Velemirovich, the power of AI hinges on two critical elements: data and computing power. Without data, these models are rendered ineffective; data is the cornerstone. Drawing on his experience in automotive technology, he notes that training data, particularly for autonomous cars, poses a significant challenge. The complex task of teaching a car to make split-second decisions, like choosing between two objects in an unavoidable situation, underscores the responsibility of the system's manufacturer. Advances in technology are mitigating such challenges, allowing vehicles to anticipate and address issues preemptively. Despite this progress, manufacturers must remain accountable for the quality of data used in training and decision-making. Even if a car operates autonomously, the driver and manufacturer share responsibility. This discussion was already happening in 2017 and 2018, and it remains unresolved today.

‘For example, if there's danger ahead, another car might communicate it to yours, even when you as a human can't see it due to fog, weather or distance. These are opportunities to increase safety, but they still involve teaching the systems to communicate. Standardizing data across different regions is another challenge, but ultimately we need to teach these systems to be more intelligent than our human limitations allow.’
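
As a rough illustration of the standardization point, vehicle-to-vehicle hazard warnings only help if every manufacturer agrees on a shared message format. The sketch below is a hypothetical schema, not any existing V2X standard; all field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class HazardAlert:
    """A hypothetical shared format for car-to-car hazard warnings."""
    hazard_type: str      # e.g. "stalled_vehicle", "ice", "low_visibility"
    latitude: float
    longitude: float
    severity: int         # 1 (advisory) .. 5 (imminent collision risk)
    source_vehicle: str   # anonymized sender ID

# A car ahead broadcasts a hazard the driver behind cannot yet see.
alert = HazardAlert("low_visibility", 52.37, 4.90, 3, "veh-4821")
if alert.severity >= 3:
    print(f"Warning: {alert.hazard_type} reported ahead")
```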

Exploring the future of ethical AI regulations

Razvan Cazan emphasizes the importance of establishing global standards, regardless of the type of regulation. While regional adaptations are necessary, global regulations are essential to prevent regulatory arbitrage. This is a significant undertaking that requires global collaboration. Ideally, within the next decade, we aim to have a robust regulatory framework that governs the AI landscape comprehensively.

Miki Velemirovich strongly believes that global initiatives are essential in today's interconnected world where organizations operate without borders. Thus, global frameworks are imperative. However, regional differences matter, and unfortunately, certain regions may be misrepresented due to data limitations. ‘As a global collective, we must ensure inclusivity, representing every part of the world and its unique components. Cultural sensitivities also play a role, varying across regions. We stand on the brink of transformative change with AI, which has the potential to revolutionize every aspect of our lives. While essential for ensuring ethical use, regulation must not stifle innovation. Balancing these priorities is challenging but crucial. Industry collaboration with regulators can incentivize transparency and openness, supported by governments. Emphasizing both global and regional needs, it's about embracing AI's potential while adapting to its changes, ensuring societal progress,’ adds Miki Velemirovich.

According to Ego Obi, the ideal approach could involve a combination. A global consensus on high-level ethical principles such as privacy, fairness and bias could be established, serving as a cornerstone. Regional bodies would then focus on implementation, adapting these principles to their local contexts and addressing specific use cases. Additionally, considering regulation at a global level, the introduction of regulatory sandboxes could be beneficial. These would encourage testing and refinement of AI regulations within controlled environments across regions before broader deployment. This ensures that everyone in the AI stakeholder ecosystem, including industry, governments, users and developers, is involved in the process and shares a common goal.

Costas Gritsis believes that regulations are futile. ‘We need to prioritize the standardization of AI systems and establish a common language to manage risks effectively. To achieve this, I propose implementing a mandatory reporting system for AI incidents, similar to what exists in aviation. When there's an incident or a near-miss event, where no people or assets are harmed but the potential is close, it should be reported. By initiating this reporting process, we can discern who is genuinely engaging in the discussion on ethics and who is merely pursuing innovation for its own sake,’ adds Costas Gritsis.
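
A minimal sketch of what one record in such an aviation-style reporting system might contain. The field names and severity levels are illustrative assumptions; no such AI incident standard currently exists.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    system_name: str        # which AI system was involved
    severity: str           # e.g. "near-miss" or "harm"
    description: str        # what happened, in plain language
    affected_parties: list  # who or what was (or could have been) harmed
    root_cause: str = "under investigation"
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a near-miss report, echoing the trading scenario discussed earlier.
report = AIIncidentReport(
    system_name="trading-algo-v2",
    severity="near-miss",
    description="Feedback loop between two trading agents amplified volatility.",
    affected_parties=["market participants"],
)
print(report.severity, report.system_name)
```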

Summary

The discussion emphasized the significance of establishing global standards while acknowledging the necessity of regional adjustments and the delicate balance between regulation and innovation. Recommendations included fostering international cooperation to develop comprehensive regulatory frameworks that address concerns surrounding privacy, bias and ethical considerations. Implementing these measures could enhance trust in AI technologies and promote responsible use, contributing to the advancement of AI technology on a global scale.

The Kestria Technology Practice Group serves early-stage innovators and established market leaders worldwide, offering deep insights into technology trends and advancements. Leveraging a robust global network, projects are accelerated and clients are connected with pioneering innovations and leaders crucial for technological breakthroughs. The commitment to delivering top-tier technology talent is enhanced by an extensive knowledge-sharing network with partners around the world.
