According to Elsevier’s report titled “Insights 2024: Attitudes toward AI”, only 22% of Indian respondents working in healthcare and research are using generative AI for work purposes, while 76% plan to use it in the next two to five years.
More respondents from APAC, including India, have used AI for work-related purposes (34%) than those from North America (30%).
The study, which surveyed nearly 3,000 people working in research and healthcare globally, further found that 95% believe generative AI is a great source of knowledge, and 87% think that using generative AI tools can improve the quality of their work.
Recently, CP Gurnani, former Tech Mahindra chief, said at AIM’s MachineCon GCC Summit, “I can only say there is no human being, including you, who is not working on generative AI.”
Can Generative AI Boost Productivity?
Most studies conducted globally have shown that AI, particularly generative AI, can improve employee productivity, freeing up time for them to focus on other areas of work as well.
A fresh study from Capgemini, also published today, shows that generative AI is expected to play a key role in augmenting the software workforce, assisting in more than 25% of software design, development, and testing work in the next two years.
But this contrasts with Genpact’s recently published report, “The GenAI Countdown”, which stated that 52% of respondents expressed concern that an overemphasis on productivity could negatively impact employee experiences.
When AIM reached out to multiple companies to understand how generative AI has boosted their productivity and led to success, such as through faster product deployment, we received no responses.
Challenges Persist
In India, many healthcare companies are actively tapping into the potential of generative AI. From startups like Practo and Healthify to hospital chains like Apollo and Narayana, everyone is exploring this segment.
In India, as well as globally, the primary concerns around this technology revolve around misinformation. As per the survey, around 94% of people are concerned about AI being used for misinformation, 86% worry about critical errors or mishaps, and 81% fear AI will erode critical thinking.
There is a clear need for transparency and reliable sources to build trust in AI tools, since AI is expected to rapidly increase the volume of scholarly and medical research. Accordingly, 71% expect AI tools’ results to be based on high-quality, trusted sources.
In a recent interview, Mira Murati, the chief technology officer of OpenAI, also acknowledged these risks and concerns, including the biases they can introduce into LLM-based products.