
Adopting AI in Healthcare: Benefits, Challenges and Real-Life Examples

Oct 31, 2024

19 mins read

Early AI applications in healthcare began in the 1970s with rule-based decision support systems. Yet only in the last decade have advances in machine learning, big data, and computational power allowed AI to make a revolutionary impact on healthcare.

Today, AI has penetrated nearly every aspect of healthcare and quickly become a competitive necessity. According to a 2021 survey of healthcare leaders in the United States, 95% of healthcare companies reported using AI, with 41% indicating their AI systems were fully functional.

However, alongside the extensive adoption of AI, concerns remain about its unethical use, which could lead to unequal treatment or worsen existing disparities in healthcare.

In this article, we will explore the benefits, ethical challenges, and risks of AI in healthcare and suggest the path forward for its ethical and effective adoption.

But first, let’s look at AI’s current impact on the industry.

AI’s Impact on Healthcare

Healthcare is one of the industries where AI has found many practical use cases. Its ability to analyze complex datasets in real time has made it invaluable in areas such as medical imaging, predictive analytics, and personalized medicine. As of 2023, the global AI in healthcare market was valued at approximately $19.27 billion, with projections suggesting it will grow at 38.5% annually from 2024 to 2030.

Regulatory advancements further support this growth. As of May 2024, the FDA had approved 882 AI/ML-enabled medical devices, including 221 approvals in 2023 alone and 45 in just the first few months of 2024. Most of the devices are being used in radiology and cardiovascular disease diagnostics.

This rising number of approvals highlights growing trust in AI’s ability to support clinical decision-making and improve medical procedures. In fact, an E&Y survey highlights that 96% of healthcare executives express trust in AI, with 94% seeing it as a positive force in their workplace. This increased trust further accelerates AI’s adoption in the healthcare industry.


Number of AI/ML-enabled medical devices in use
List of AI/ML-enabled medical devices in use by category

Generative AI is also gaining traction within healthcare. According to Deloitte’s 2024 Life Sciences and Health Care Generative AI Outlook Survey, healthcare organizations increasingly recognize the potential of generative AI to enhance care and operations. As of 2024, 75% of leading healthcare companies are either experimenting with generative AI or actively working to scale its use cases.

Recent statistics on generative AI adoption in healthcare

According to the Deloitte Center for Health Solutions, 92% of healthcare leaders see significant promise in generative AI for improving operational efficiency. Additionally, 65% believe generative AI can streamline decision-making by analyzing complex medical data and flagging potential health issues.

For example, recent studies on generative AI chatbots, like ChatGPT, show that they can assist with tasks like documenting patient visits, identifying possible causes for symptoms, and drafting clinical notes after an appointment. Yet this is just one of many ways companies can use AI and generative AI in healthcare.

With countless opportunities for further innovation, AI has the potential to completely transform administrative tasks, diagnostics, and treatment processes. To better understand its impact, let’s look at the pros and cons of AI in healthcare and how they influence everyday clinical and operational workflows.

Benefits of AI Usage in Healthcare

A survey conducted earlier this year by Ernst & Young among healthcare and life sciences executives at the VP level revealed overwhelming confidence in the use of artificial intelligence in healthcare. The survey indicated that 95% of healthcare executives actively support the use of AI in their organizations, and about 72% are already using generative AI in some capacity.

Here are several reasons explaining this active AI support and adoption among healthcare companies.

Benefits of AI usage in healthcare

Cost savings and positive ROI

A 2023 E&Y study estimated that AI could save the healthcare industry between $200 billion and $300 billion annually by streamlining processes and eliminating inefficiencies, as much as 5-10% of total healthcare spending. And those are just the financial gains. AI adoption in healthcare also brings nonfinancial benefits such as improved quality of care, better access, and an enhanced patient and clinician experience. McKinsey states that 60% of organizations that have implemented generative AI solutions are already seeing positive ROI or expect to do so soon.

Fewer medical errors

Medical errors affect up to 7 million patients annually and cost over $20 billion in the United States alone. AI can help minimize these errors by analyzing vast amounts of patient data and flagging potential health issues or misdiagnoses. A recent E&Y study shows that an AI algorithm trained to analyze mammograms increased breast cancer detection by 9.4% compared to human radiographers and reduced false-positive diagnoses by 5.7%. Such improvements in diagnostic accuracy can help prevent medical errors and, in turn, reduce the number of medical misdiagnosis claims and related expenses.

Better access and affordability

According to Deloitte’s 2023 Health Care Consumer Survey, 53% of respondents believed generative AI could improve healthcare access, and 46% said it could reduce costs. Respondents who already had experience using generative AI in healthcare were even more optimistic: 69% indicated that AI could improve access, and 63% were optimistic about its potential to make healthcare more affordable. And there are good reasons for that. Harvard’s School of Public Health has projected that AI-driven diagnostics could cut treatment costs by up to 50% while improving health outcomes by 40%.

Automation of administrative tasks

The global shortage of clinicians is worsening, leaving healthcare workers overburdened and often resulting in medical errors and dissatisfied patients. To break this vicious cycle, healthcare organizations need to rethink how care is delivered. Luckily, AI can automate scheduling, billing, medical coding, and other time-consuming activities, lifting the administrative burden from medical workers.

New Accenture research shows that generative AI could augment up to 40% of healthcare working hours, allowing clinicians to focus on higher-value tasks like patient care. A study by Ernst & Young supports this view: 94% of clinicians believe AI will enhance productivity and efficiency and enable them to provide better care.

McKinsey likewise identifies clinician productivity as one of the areas where generative AI is expected to deliver the highest value. Its survey respondents also recognized AI’s potential to enhance patient engagement, administrative efficiency, and overall quality of care.

Areas believed to benefit the most from generative AI

Better communication with patients

Poor communication is a significant issue in healthcare, with 83% of patients citing it as the worst part of their experience. AI-powered technologies like natural language processing, predictive analytics, and speech recognition can improve how healthcare providers communicate with patients. These tools can transcribe speech into written text, analyze patients’ medical history and previous prescriptions, and suggest more specific information about treatment options.

Additionally, using AI-powered chatbots can improve communication between a healthcare facility and patients. A recent article in the New England Journal of Medicine exploring the use of medical AI chatbots acknowledged that while generative AI can be incredibly powerful, it also has limitations, particularly in more complex medical scenarios that require human judgment and empathy.

Still, medical AI chatbots have proved highly useful for scheduling appointments, sending medication reminders, and providing general health information.


While the number of companies using AI in healthcare continues to grow, newcomers should first address the challenges of its adoption and ethical use.

AI Challenges in Healthcare and Ways to Overcome Them

The use of AI in healthcare offers immense potential, but its adoption brings several complex challenges that organizations must address first.

Here, we’ve gathered the major challenges and tips on how you can solve them based on our company’s experience in AI software development.

Challenges hindering AI adoption in healthcare

Poor data quality and data collection issues

Data is the foundational building block for healthcare companies looking to adopt AI. Although these organizations generate vast amounts of data, much of it is unstructured, dispersed across multiple systems, and exists in various formats.

A recent Deloitte report states that data-related issues are among the top three challenges faced by companies implementing AI initiatives. In fact, 28% of respondents from life sciences and healthcare organizations reported that data collection poses a significant obstacle to their AI efforts. This is not surprising, as effective data collection must ensure that information is properly gathered, anonymized, and diverse.

For instance, information generated for human interpretation, such as free-text notes, scans, or images, may not be optimized for quantitative, computer-based analysis. This can lead to difficulties in training AI systems to interpret data correctly, potentially resulting in flawed predictions or misdiagnoses.


Since data quality and diversity directly influence the performance of AI and machine learning algorithms, using incomplete or poorly organized data can produce biased and unreliable results. If the data predominantly comes from a limited age group, ethnic background, or specific patient population, the resulting AI models may not generalize well and can reinforce existing biases in healthcare decisions.

Several recent studies indicate that, in some cases, using synthetic data can enhance AI model training and performance. When generated and used correctly, synthetic data can:

  • Accelerate drug discovery through simulated clinical trials
  • Improve data accessibility by filling in gaps and increasing data volume
  • Protect privacy by reproducing original data without revealing personally identifiable information

The scheme below illustrates how to create synthetic datasets that maintain the key characteristics and patterns of the original data while ensuring that no sensitive information is disclosed.

Ways to create synthetic datasets in healthcare

Despite these benefits, it’s essential to acknowledge the potential drawbacks of synthetic data. If synthetic datasets do not accurately reflect the diversity of the real-world population, they can lead to biased AI models, reinforcing existing disparities in healthcare decisions and negatively impacting patient outcomes. Additionally, the process of generating synthetic data can introduce its own complexities and may not always capture the nuances present in actual patient data.
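As a concrete illustration of the idea, here is a minimal, hedged sketch in Python using NumPy. The patient-vitals columns are invented for illustration. It fits independent per-column Gaussians to a small real dataset and samples new rows, which preserves each column’s marginal statistics but loses cross-column nuance; production systems typically use richer generators such as copulas or GANs.

```python
import numpy as np

def synthesize(real: np.ndarray, n_rows: int, seed: int = 0) -> np.ndarray:
    """Sample synthetic rows from per-column Gaussians fitted to the
    real data. A deliberately simple baseline: it preserves each
    column's mean and spread but not cross-column correlations."""
    rng = np.random.default_rng(seed)
    return rng.normal(real.mean(axis=0), real.std(axis=0),
                      size=(n_rows, real.shape[1]))

# Hypothetical patient vitals: [age, systolic BP, resting heart rate]
real = np.array([[54, 130, 72],
                 [61, 142, 80],
                 [47, 118, 66],
                 [70, 150, 75]], dtype=float)

synthetic = synthesize(real, n_rows=1000)
print(synthetic.shape)  # (1000, 3): no row traces back to a real patient
```

Because each synthetic row is a fresh random draw rather than a perturbed copy, no record maps back to an individual patient, which is the privacy property the approach relies on.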

Secure data handling

Securing patient data in AI-driven healthcare solutions is paramount, as breaches can have severe consequences for privacy and trust. However, even with rigorous standards, vulnerabilities can lead to significant exposure. In 2020, security researcher Jeremiah Fowler discovered that 2.5 million medical records had been exposed on the internet due to misconfigured storage by Cense AI, a company specializing in SaaS-based intelligent process automation management solutions.

To avoid such exposures and keep sensitive patient information from becoming vulnerable online, invest in comprehensive security practices such as data encryption, multi-factor authentication, and routine vulnerability assessments.
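One small, concrete piece of such a layered approach is pseudonymizing direct identifiers before records leave a trusted boundary. The sketch below uses only the Python standard library; the key and record number are placeholders, not a real scheme. With a keyed HMAC, a leaked dataset exposes opaque tokens rather than raw identifiers.

```python
import hmac
import hashlib

# Placeholder key for illustration; in production it would come from
# a key-management service, never from source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed HMAC-SHA256 token.
    Unlike a plain hash, the token cannot be recomputed without the
    key, yet the same patient always maps to the same token, so
    records can still be linked across systems."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("MRN-0012345")  # hypothetical medical record number
print(len(token))  # 64 hex characters; the raw MRN is never stored
```

Pseudonymization complements, rather than replaces, encryption at rest and in transit: it limits the blast radius when storage is misconfigured, as in the incident described above.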

Compliance with data protection laws and regulations

For AI to be effective, it must integrate seamlessly into existing healthcare systems without causing disruption or adding to the workload of healthcare professionals. AI solutions must also comply with data protection laws such as HIPAA, GDPR, CCPA, ISO standards, and local regulations, which ensure proper handling of patient data, informed consent, clinical validation, and legal liability.

A solution to this challenge is establishing strong governance and oversight frameworks to guide the responsible use of AI. Many healthcare organizations are already taking steps to mitigate associated risks. According to the Deloitte survey, 82% of healthcare organizations have already implemented, or plan to implement, governance frameworks to manage the risks associated with this technology, such as data privacy, accuracy, and ethical considerations.

Need to keep humans in the loop

While AI systems can process vast amounts of data and make decisions with impressive speed, they lack the nuanced understanding and empathy that human healthcare professionals provide. Besides, human oversight is essential to ensure that AI-generated recommendations are trustworthy and align with best practices and patient needs. This means that healthcare professionals must be involved in the decision-making process to verify AI outputs.

Additionally, as AI models can degrade over time due to data shifts or evolving standards, continuous human input is essential to refine them periodically. Algorithms should also be validated before use in real-life scenarios. This helps maintain their reliability, prevents errors from compounding, and keeps the technology aligned with the latest medical guidelines.
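A minimal sketch of what that continuous oversight can look like in code, assuming a hypothetical monitoring job that compares a model’s rolling accuracy on clinician-labeled cases against its validated baseline (all thresholds here are illustrative, not clinical recommendations):

```python
def needs_review(recent_accuracy: list[float],
                 baseline: float = 0.90,
                 tolerance: float = 0.05) -> bool:
    """Flag a deployed model for human review when its rolling
    accuracy on recent, clinician-labeled cases drifts more than
    `tolerance` below the validated baseline."""
    rolling = sum(recent_accuracy) / len(recent_accuracy)
    return (baseline - rolling) > tolerance

print(needs_review([0.91, 0.89, 0.90]))  # False: close to baseline
print(needs_review([0.86, 0.82, 0.84]))  # True: drifted, route to humans
```

The point of such a check is not to retrain automatically, but to route the model back to human experts, who decide whether the drift reflects new clinical guidelines, shifting patient populations, or a data-quality problem.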

Legacy software

According to McKinsey, 57% of healthcare companies cite risk concerns, and 29% point to technical limitations as the top challenges in scaling AI solutions. One of the reasons may lie in the use of legacy software. Many healthcare systems still rely on older infrastructure that was not designed to handle the complex demands of AI algorithms, making AI integration difficult and time-consuming.

Another challenge stemming from the need to modernize legacy software is that many healthcare organizations are hesitant to adopt AI more broadly due to concerns about data security and compliance.

Healthcare organizations must invest in infrastructure and collaborate with a trusted software development company to gain the confidence required to deploy AI tools at scale. However, they may face another challenge when doing so: the shortage of AI experts.

AI talent shortage

The McKinsey State of AI in 2022 Survey found that hiring AI professionals, such as data scientists, architects, and machine learning engineers, is a major challenge for 46% of respondents. Because of this difficulty in filling the technology expertise gap, 29% of healthcare organizations are not pursuing generative AI adoption.

Fortunately, there is a way to overcome this challenge.

McKinsey states that 59% of healthcare organizations have already partnered with third-party vendors to develop customized generative AI solutions. Meanwhile, 24% of respondents plan to build AI solutions in-house, while only 17% expect to buy off-the-shelf generative AI products.

Partnering with an experienced AI software development company can help healthcare organizations accelerate AI adoption, enhance their capabilities, and start reaping AI’s benefits more quickly.

In addition to the adoption challenges, AI usage in healthcare also brings ethical considerations that organizations should be aware of if they plan to invest in this technology.

Ethical Concerns of AI in Healthcare

As AI becomes more embedded in healthcare systems and its capabilities grow more advanced, many companies have raised concerns about the ethical challenges it brings.

Let’s take a closer look.

Ethical concerns of AI usage in healthcare

Patient privacy concerns

Privacy and data security remain significant concerns for both consumers and healthcare organizations when adopting AI applications in healthcare. As AI systems analyze sensitive patient data, healthcare software providers must ensure they handle, store, and transmit this information securely and responsibly. An Accenture study reveals that 69% of consumers agree that healthcare providers must develop clear guidelines on biometric privacy and neurotechnology ethics to build trust.

An E&Y CEO Outlook Pulse survey found that 66% of global healthcare CEOs acknowledge more work is needed to address the social, ethical, and criminal risks in an AI-driven future. However, only 36% of these leaders said they know how to govern AI-related risks effectively.

The current gap in US AI-related regulations may explain these numbers. While some states have AI regulations as part of broader consumer privacy laws, very few have proposed legislation specifically addressing AI’s role in healthcare data protection. That is probably why 57% of organizations hesitate to pursue generative AI solutions due to security risks.

In contrast, the EU is ahead in this area. Over a year ago, it introduced the AI Act, the world’s first comprehensive AI law, now in force across the European Union to address AI-associated risks. Even if your company is located outside the EU, the Act remains an excellent reference for ensuring patient data protection and its ethical use.

Algorithmic bias and AI hallucinations

One of the critical risks of AI in healthcare is the potential for biased or inaccurate outputs, which can directly impact patient care. Algorithmic AI bias in healthcare occurs when AI systems produce uneven results, often due to the data they are trained on. In a May 2022 report on the impact of race and ethnicity in healthcare, Deloitte emphasized the need to reassess long-standing clinical algorithms to ensure equitable care for all patients. They recommended forming teams to evaluate how race is used in these algorithms and determine whether its inclusion is justified to prevent biased treatment recommendations.

Limited transparency

Many AI models, particularly deep learning models, function as “black boxes”: while we can observe inputs and outputs, the logic behind their decisions can remain unclear. Product owners and developers often withhold model and training details to protect intellectual property, which adds to this opacity.

This lack of transparency affects trust. A recent study found that when patients understand how an AI system works and makes decisions, they are more likely to trust it. However, achieving full transparency is particularly challenging for complex models like deep learning algorithms. Building systems that are explainable, or at least more interpretable, can be key to ensuring AI’s wider adoption in healthcare.

Lack of trust in standalone AI usage

Deloitte’s 2024 consumer healthcare survey found that the adoption of generative AI for healthcare purposes has made little progress over the past year. The major reason is growing consumer distrust, which is particularly strong among millennials and baby boomers. In 2023, 21% of millennials distrusted the information provided by generative AI; in 2024, this figure rose to 30%. Similarly, distrust among baby boomers increased from 24% to 32% over the same period.

Trust is even lower among clinicians. Despite AI’s success in medical diagnostics and imaging, many doctors remain skeptical. A survey by Ernst & Young revealed that 83% of clinicians were concerned about using AI to personalize medical plans or assist with diagnoses.

However, the survey also showed that while many patients still trust their doctors more than AI, 71% of respondents were comfortable with their doctors using AI to explain treatment options, and 65% were open to their providers using AI to interpret diagnostic results.

Ensuring AI tools align with clinical workflows and providing clinicians with intuitive insights rather than intricate technical details could strengthen trust in AI applications. You can do this by engaging clinicians in the development process, which can make AI more useful and effective.

How Leobit Can Help with AI Adoption in Healthcare

For the last nine years, Leobit has been developing software for healthcare companies, hospitals, laboratories, research institutions, healthcare providers, and pharmaceutical businesses. Our expertise includes ensuring robust information security and delivering software compliant with HIPAA and ISO 27001 standards.

We’ve also been actively using emerging technologies such as AI, big data analytics, blockchain, and cloud-native computing to build healthcare software. Here are some examples of how we have applied artificial intelligence in healthcare projects.

Case study 1: Development of an AI-Powered Digital Dermoscopy Application

AI-Based Digital Dermoscopy Application

Our customer, a European skin imaging system company, wanted to develop a dermoscopy kiosk app, with a primary focus on camera functionality, that would work with the company’s hardware case. The biggest challenge lay in the distortion (a fish-eye effect) caused by the hardware case lens placed over the phone camera, which made it difficult for AI skin change detection algorithms to analyze the images properly.

Leobit experts tackled this challenge by dynamically reprocessing the camera preview using OpenCV. We developed a plugin that runs each frame through a distortion-correction algorithm in real time, providing a stabilized image that AI algorithms can analyze accurately.

Additionally, we used a TensorFlow model to optimize the images to meet the input requirements for the company’s deep-learning algorithm. The TensorFlow model generates two outputs:

  • An assessment of whether the image is valid and suitable for deep learning analysis.
  • An AI assessment score ranging from 0 to 1 that indicates the likelihood of the patient having a malignant skin lesion, with a higher score reflecting a higher risk.

To further improve the precision of AI analysis, we implemented functionality allowing users to tag the exact location of moles on the body.
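Application code consuming such a two-output model would typically gate the risk score on the validity check. Here is a hedged sketch; the names and threshold are illustrative, not the customer’s actual API:

```python
from dataclasses import dataclass

@dataclass
class LesionAssessment:
    valid: bool        # output 1: is the image suitable for analysis?
    risk_score: float  # output 2: 0..1 likelihood of a malignant lesion

RISK_THRESHOLD = 0.5   # illustrative cut-off, not a clinical recommendation

def triage(a: LesionAssessment) -> str:
    """Gate the risk score on the validity flag: an unusable image
    should prompt a retake, never a diagnosis."""
    if not a.valid:
        return "retake_image"
    if a.risk_score >= RISK_THRESHOLD:
        return "flag_for_dermatologist"
    return "routine_monitoring"

print(triage(LesionAssessment(valid=False, risk_score=0.9)))   # retake_image
print(triage(LesionAssessment(valid=True,  risk_score=0.72)))  # flag_for_dermatologist
```

Checking validity first matters: a high risk score computed on a blurry or badly framed image is noise, so the app asks for a retake instead of alarming the patient.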

Discover more about our input in the AI-based Digital Dermoscopy Application case study.

Case study 2: Development of a smart trichoscopy application powered by computer vision

Smart trichoscopy application

Our customer, a global healthtech company specializing in hair research and diagnostics solutions, aimed to develop a trichoscopy mobile application using Flutter. The app needed to work with the company’s proprietary computer vision algorithm and analyze patients’ hair photos taken with a smartphone alone or paired with a specialized hardware case.

To ensure fast and efficient performance, Leobit developed a three-layer architecture based on the BLoC pattern. After analyzing the project requirements, we discovered that while Flutter was ideal for the app’s interface, it did not fully meet the performance needs of more complex tasks. To address this, our developers proposed supplementing Flutter with Kotlin for Android and Swift for iOS, using the native Android and iOS APIs.

Since the image analysis algorithms were written in C++, we used Dart FFI to let Flutter call these C++ functions for optimized performance. This solution reduced image processing time significantly, from 2 minutes to less than 10 seconds.

To learn more, explore our full case study on Smart trichoscopy application with a hardware case.

Summing Up

From enhancing diagnostic accuracy and saving costs to streamlining administrative tasks and improving patient outcomes, AI adoption in healthcare offers transformative benefits that could reshape the industry and make healthcare more affordable.

However, the path to widespread AI adoption is not without its challenges. Ethical concerns around transparency, data privacy, algorithmic bias, and trust are significant barriers that need careful attention. Moreover, the technical limitations of integrating AI into legacy systems, navigating regulatory landscapes, and addressing workforce readiness add layers of complexity to wider AI adoption and scaling its use cases.

Partnering with an experienced company like Leobit, which offers AI and healthcare software development services, can help you choose the right approach to overcoming these challenges and ensuring the ethical use of AI. Contact us, and we’ll gladly provide a deeper consultation on the topic.
