Ensuring Responsible AI for IT: A Conversation with Microsoft

In our webinar “How to Ensure AI for IT is Responsible, Explainable, and Trustworthy,” Microsoft Director, Azure Data & AI, Jim Brennan joined Lakeside’s Principal Data Scientist Dan Parshall, Ph.D., to discuss Responsible AI as it relates to the growing integration of AI into enterprise IT. As AI-powered tools such as the Lakeside SysTrack platform become more common, IT professionals must not only adapt to these technological shifts but also ensure that their organizations can fully realize AI’s value while safeguarding critical concerns such as data privacy, security, and compliance.

The Role of Data Ownership, Privacy, and Security in Responsible AI

It goes without saying that before there was AI, there was data. Lots of it. “One of the underlying reasons that AI seems to have exploded out of nowhere is actually tied to the wealth of data that’s available now due to the cloud,” said Jim Brennan. “It’s necessary to have a lot of high-quality data to get great results with AI.” The first order of business when setting an AI strategy, then, is establishing a clear and transparent data strategy. Who owns the data? How can you secure it? What safeguards are in place to protect data privacy? For software-as-a-service (SaaS) companies, these questions matter both to your enterprise and to your customers.

As Lakeside Founder Mike Schumacher has said, “There’s a saying that data is the new oil, but to me, data is not oil but diamonds. Rich data, with many facets, is what you need to build the kind of AI models that output information you can trust.” Indeed, “data has significant value for AI,” Brennan emphasized, “and the fact of the matter is that no company is just going to share that data with an AI service unless they can be confident of their ownership of the data, how it’s being used, how it’s being secured, and so forth.” This context informs both Microsoft’s own Responsible AI policies and Lakeside’s “Responsible Use of AI” statement.

Despite such transparency from Cloud Service Providers and SaaS companies invested in AI, Dan Parshall pointed out that many enterprises remain cautious about sharing their data. That’s why Parshall is on a mission to debunk common misconceptions about “sharing data,” noting that “a big thing I’m trying to raise awareness about is that it is possible to share insights without sharing actual data.” He explained that, with the right methods, it is entirely possible to extract large volumes of insights from data while keeping stringent data protection protocols in place, as the sketch below illustrates.
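To make that idea concrete, here is a minimal sketch of one common approach: computing summary statistics locally so that only aggregates, never raw records, leave the environment. The field names, metrics, and data shown are assumptions for illustration, not Lakeside’s or Microsoft’s actual implementation.

```python
# Hedged sketch: share aggregate insights, not raw telemetry records.
from statistics import mean, quantiles

# Hypothetical raw endpoint telemetry; in practice this stays on-premises.
raw_records = [
    {"device": "laptop-001", "boot_time_s": 42.1, "cpu_pct": 37.0},
    {"device": "laptop-002", "boot_time_s": 61.8, "cpu_pct": 74.5},
    {"device": "laptop-003", "boot_time_s": 38.4, "cpu_pct": 22.3},
]

def summarize(records, field):
    """Return only aggregate statistics for one metric (no device identifiers)."""
    values = [r[field] for r in records]
    return {
        "metric": field,
        "count": len(values),
        "mean": round(mean(values), 2),
        "p90": round(quantiles(values, n=10)[-1], 2),
    }

# Only these aggregates would ever be shared with an external AI service.
shared_insights = [summarize(raw_records, "boot_time_s"),
                   summarize(raw_records, "cpu_pct")]
print(shared_insights)
```

The design point is simply that the raw, identifiable records never cross the boundary; the external service sees counts, means, and percentiles rather than the data itself.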

“So, when we say your data is your data is your data,” Brennan said, “we mean that we are a conduit for processing that data through the Azure AI services on behalf of Lakeside and its end customers. That means we don’t ever claim rights to your data.”

The Importance of High-Quality Data for Responsible AI 

Another critical discussion point of the webinar was the relationship between data quality and the AI model. While AI models thrive on vast amounts of data, more data isn’t always better. As Parshall highlighted, “More data does not necessarily make for better AI models because garbage in means garbage out.” Data quality, therefore, is essential for ensuring the AI can produce relevant, explainable, and trustworthy outputs.

Poor-quality data can skew results, rendering even the most sophisticated models ineffective. In contrast, high-quality and well-structured datasets can enhance model accuracy and performance. For example, a smaller, specialized language model trained exclusively on specific, focused data can deliver more reliable outputs than models trained on the broader, noisier data available on the internet.

Given the underlying relationship between data quality and AI quality, it is essential to maintain clean, relevant data, especially in enterprise environments. For IT leaders, investing in data governance practices and curating high-quality data assets are crucial steps toward making AI initiatives successful.
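One practical starting point for such governance is a simple data-quality gate that rejects incomplete or implausible records before they ever reach a model. The sketch below is illustrative only; the field names, ranges, and rules are assumptions, not a prescribed standard.

```python
# Hedged sketch: filter out low-quality records before training or analysis.
def is_valid(record):
    """Reject records that are incomplete or outside plausible ranges."""
    required = ("device", "boot_time_s", "cpu_pct")
    if any(record.get(k) is None for k in required):
        return False
    if not (0 < record["boot_time_s"] < 3600):   # implausible boot times
        return False
    if not (0 <= record["cpu_pct"] <= 100):      # CPU usage must be a percentage
        return False
    return True

raw = [
    {"device": "laptop-001", "boot_time_s": 42.1, "cpu_pct": 37.0},
    {"device": "laptop-002", "boot_time_s": -5.0, "cpu_pct": 74.5},   # bad value
    {"device": None,         "boot_time_s": 38.4, "cpu_pct": 22.3},   # missing field
]

clean = [r for r in raw if is_valid(r)]
print(f"kept {len(clean)} of {len(raw)} records for training")
```

Even a lightweight check like this reflects the “garbage in, garbage out” principle: a smaller, curated dataset often serves the model better than a larger, noisier one.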

Responsible AI and the Importance of Ethical Practices

The webinar also touched on responsible AI practices, which include ensuring that models do not become overly restrictive or produce biased results. One method is introducing controlled randomness into decision-making models, such as those used in credit card fraud detection or hiring processes. This approach allows IT teams to maintain a level of flexibility and continually refine models based on real-world outcomes, reducing the risk of bias or overly rigid automation.

For instance, in fraud detection, allowing occasional transactions that the model flags as suspicious can provide a learning opportunity without unduly impacting the customer experience. Similarly, in hiring processes, incorporating randomness ensures that potentially suitable candidates are not filtered out prematurely, giving organizations a chance to reassess and improve their models.
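The controlled-randomness idea described above can be sketched in a few lines. The exploration rate, threshold, and scoring function here are assumptions for illustration; the point is only that a small, deliberate fraction of flagged cases is allowed through so real-world outcomes can feed back into the model.

```python
# Hedged sketch: occasionally approve a flagged transaction to gather ground truth.
import random

EXPLORATION_RATE = 0.02   # fraction of flagged cases allowed through for review

def decide(fraud_score, threshold=0.9):
    """Return (action, reason) for one transaction given a model's fraud score."""
    if fraud_score < threshold:
        return "approve", "below threshold"
    # Model says "suspicious": with a small probability, approve anyway and
    # label the case for follow-up so the outcome can refine the model.
    if random.random() < EXPLORATION_RATE:
        return "approve", "exploration: collect ground truth"
    return "block", "flagged as suspicious"

print(decide(fraud_score=0.95))
```

The same pattern applies to hiring pipelines: letting a small share of borderline candidates proceed gives the organization evidence to reassess whether the model’s filter is too rigid or biased.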

The Evolving Role of IT Professionals in an AI-First World 

As AI continues to reshape enterprise IT, the skills required of IT professionals are evolving. No longer confined to just managing hardware or troubleshooting systems, IT leaders are now expected to be well-versed in data science principles, AI tools, and advanced analytics. While not everyone in IT needs to become a data scientist, there’s a growing expectation that IT teams understand how AI models work, how to manage data for these models, and how to apply AI-driven insights to solve business challenges.

This shift represents a significant opportunity for IT professionals willing to embrace lifelong learning. For instance, tools such as Power BI, which were traditionally seen as business intelligence platforms, are now increasingly being integrated with AI capabilities. Both Brennan and Parshall pointed out that IT professionals who can bridge the gap between traditional IT operations and modern data science stand to play a critical role in guiding their organizations through digital transformation initiatives.

Responsible AI for the Future

The integration of AI into enterprise IT is more than just a technical shift — it is a fundamental change in how organizations approach data, privacy, and decision-making. For IT leaders, the key takeaways from this discussion are clear: prioritize data ownership and security, understand the trade-offs between data quality and quantity, and embrace the evolving role of IT as data-driven and AI-powered. By adopting these principles, organizations can navigate the complexities of AI deployment while ensuring responsible and effective use of these transformative technologies.
