Generative AI News, Stories and Latest Updates
Artificial Intelligence, and Its Commercial, Social and Political Impact

Channel-Specific and Product-Centric GenAI Implementation in Enterprises Leads to Data Silos and Inefficiencies
https://analyticsindiamag.com/ai-origins-evolution/channel-specific-and-product-centric-genai-implementation-in-enterprises-leads-to-data-silos-and-inefficiencies/
Tue, 03 Sep 2024

Pega employs ‘situational layer cake’, which, as a part of its exclusive centre-out architecture, helps adapt microjourneys for different customer types, lines of business, geographies, and more.

The post Channel-Specific and Product-Centric GenAI Implementation in Enterprises Leads to Data Silos and Inefficiencies appeared first on AIM.


Organisations often struggle with data silos and inefficiencies when implementing generative AI solutions. This affects over 70% of enterprises today, but global software company Pegasystems, aka Pega, seems to have cracked the code by using its patented ‘situational layer cake’ architecture. 

This approach democratises the use of generative AI across its platform, allowing clients to seamlessly integrate AI into their processes. They can choose from any LLM service provider, including OpenAI, Google’s Vertex AI, and Azure OpenAI Services, thereby ensuring consistent and efficient AI deployment across all business units.
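As a rough illustration of what provider-by-configuration means, the sketch below routes a prompt to whichever LLM backend a config names. This is not Pega's code; the provider names and completion functions are hypothetical stand-ins for real API calls.

```python
# Illustrative sketch: business logic calls complete() and never a specific
# vendor SDK, so the provider can be swapped by configuration alone.
from typing import Callable, Dict


def openai_complete(prompt: str) -> str:
    # Stand-in for a real OpenAI API call.
    return f"[openai] {prompt}"


def vertex_complete(prompt: str) -> str:
    # Stand-in for a real Vertex AI call.
    return f"[vertex] {prompt}"


PROVIDERS: Dict[str, Callable[[str], str]] = {
    "openai": openai_complete,
    "vertex": vertex_complete,
}


def complete(prompt: str, config: dict) -> str:
    """Dispatch the prompt to whichever provider the client configured."""
    provider = PROVIDERS[config["llm_provider"]]
    return provider(prompt)


print(complete("Summarise this case", {"llm_provider": "vertex"}))
```

Because the calling code only ever sees `complete`, moving from one provider to another becomes a one-line configuration change rather than a rewrite.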

“Our GenAI implementation at the rule type levels allows us to democratise the use of LLMs across the platform for any use case and, by mere configuration, our clients can use any LLM service provider of their choice,” said Deepak Visweswaraiah, vice president, platform engineering and site managing director at Pegasystems, in an interaction with AIM.

Pega vs the World 

Recently, Salesforce announced the launch of two new generative AI agents, Einstein SDR Agent and Einstein Sales Coach Agent, which autonomously engage leads and provide personalised coaching. This move aligns with Salesforce’s strategy to integrate AI into its Einstein 1 Agentforce Platform, enabling companies like Accenture to scale deal management and focus on complex sales.

Salesforce integrates AI across all key offerings through its unified Einstein 1 Platform, which enhances data privacy, security, and operational efficiency via the Einstein Trust Layer. 

“We have generative AI capabilities in sales cloud, service cloud, marketing cloud, commerce cloud, as well as our data cloud product, making it a comprehensive solution for enterprise needs,” said Sridhar H, senior director of solution engineering at Salesforce.

SAP’s generative AI strategy, on the other hand, centres around integrating AI into core business processes through strategic partnerships, ethical AI principles, and enhancing its Business Technology Platform (BTP) to drive relevance, reliability, and responsible AI use across industries.

“We are adding a generative AI layer to our Business Technology Platform to address data protection concerns and enhance data security,” stated Sindhu Gangadharan, senior VP and MD of SAP Labs, underscoring the company’s focus on integrating AI with a strong emphasis on security and business process improvement.

Oracle, on the other hand, focuses on leveraging its second-generation cloud infrastructure, Oracle Cloud Infrastructure (OCI). It is designed with a unique, non-blocking network architecture to support AI workloads with enhanced data privacy while extending its data capabilities across multiple cloud providers.

“We’re helping customers do training inference and RAG in isolation and privacy so that you can now bring corporate sensitive, private data…without impacting any privacy issue,” said Christopher G Chelliah, senior vice president, technology & customer strategy, JAPAC at Oracle.

Meanwhile, IBM has watsonx.ai, an AI and data platform designed to help companies integrate, train, and deploy AI models across various business applications.

IBM’s generative AI strategy with watsonx.ai differentiates itself by offering extensive model flexibility, including IBM-developed (Granite), open-source (Llama 3 and the like), and third-party models, along with robust client protection and hybrid multi-cloud deployment options. At the same time, Pega focuses on deeply integrating AI within its platform to streamline business processes and eliminate data silos through its unique situational layer cake architecture.

Pega told AIM that it distinguishes itself from its competitors by avoiding the limitations of traditional technological approaches, which often lead to redundant implementations and data silos. “In contrast, competitors might also focus more on channel-specific designs or product-centric implementations, which can lead to inefficiencies and fragmented data views across systems,” said Visweswaraiah.

Situational Layer Cake Architecture 

Pega told AIM that its approach to integrating GenAI processes into business operations is distinct due to its focus on augmenting business logic and decision engines rather than generating code for development. 

It employs the situational layer cake architecture, which as a part of Pega’s exclusive centre-out architecture, helps to adapt microjourneys for different customer types, lines of business, geographies, and more. 

“Our patented situational layer cake architecture works in layers, making specialising a cinch, differentiating doable, and applying robust applications to any situation at any time, at any scale,” said Visweswaraiah.

He added that enterprises can start with small, quick projects that can grow and expand over time, ensuring they are adaptable and ready for future challenges.

In addition to this, the team said it has the ‘Pega Infinity’ platform, which can mirror any organisation’s business by capturing the critical business dimensions within its patented situational layer cake. 

“Everything we build in the Pega platform – processes, rules, data models, and UI – is organised into layers within the situational layer cake. This means that you can roll out new products, regions, or channels without copying or rewriting your application,” shared Visweswaraiah.

He further said that the situational layer cake lets you declare what is different and only what is different into layers that match each dimension of your business. 

Simply put, when a user executes the application, the Pega platform slices through the situational layer cake and automatically assembles an experience that is tailored exactly to that user’s context. 
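The slicing behaviour described above can be sketched as a layered lookup: a base layer holds defaults, and more specific layers override them when their dimensions match the user's context. The toy Python sketch below is purely illustrative, not Pega's implementation; the dimensions and values are invented.

```python
# Base layer: defaults that apply everywhere.
BASE = {"greeting": "Hello", "currency": "USD"}

# Specialisation layers: (matching dimensions, overrides). Only the
# differences live in each layer, never a full copy of the application.
LAYERS = [
    ({"region": "IN"}, {"currency": "INR"}),
    ({"region": "IN", "lob": "retail"}, {"greeting": "Namaste"}),
]


def resolve(context: dict) -> dict:
    """Assemble the experience for one user's context."""
    result = dict(BASE)
    # Apply layers from least to most specific so narrower layers win.
    for dims, overrides in sorted(LAYERS, key=lambda layer: len(layer[0])):
        if all(context.get(k) == v for k, v in dims.items()):
            result.update(overrides)
    return result


print(resolve({"region": "IN", "lob": "retail"}))
# {'greeting': 'Namaste', 'currency': 'INR'}
```

A new region or line of business is then a new layer of overrides, which is the behaviour the article attributes to the layer cake: declare only what is different.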

Visweswaraiah believes that this architecture has given them a great opportunity to integrate GenAI into the platform at the right layers so it is available across the platform. 

The Operationalisation of GenAI
https://analyticsindiamag.com/ai-highlights/the-operationalisation-of-genai/
Mon, 02 Sep 2024

Organisations are now pledging substantial investments toward GenAI, indicating a shift from conservative avoidance to careful consideration.


The operationalisation of GenAI is becoming significant across various industries. Vinoj Radhakrishnan and Smriti Sharma, principal consultants, financial services at Fractal, shared insights into this transformative journey, shedding light on how GenAI is being integrated into organisational frameworks, particularly in the banking sector, addressing scepticism and challenges along the way.

“GenAI has undergone an expedited evolution in the last two years. Organisations are moving towards reaping the benefits that GenAI can bring to their ecosystem, including in the banking sector. Initial scepticism surrounding confidentiality and privacy has diminished with more available information on these aspects,” said Sharma.

She noted that many organisations are now pledging substantial investments toward GenAI, indicating a shift from conservative avoidance to careful consideration.

Radhakrishnan added, “Organisations are now more open to exploring various use cases within their internal structures, especially those in the internal operations space that are not customer-facing.

“This internal focus allows for exploring GenAI’s potential without the regulatory scrutiny and reputational risk that customer-facing applications might invite. Key areas like conversational BI, knowledge management, and KYC ops are seeing substantial investment and interest.”

Challenges in Operationalisation

Operationalising GenAI involves scaling applications, which introduces complexities. “When we talk about scaling, it’s not about two or three POC use cases; it’s about numerous use cases to be set up at scale with all data pipelines in place,” Sharma explained. 

“Ensuring performance, accuracy, and reliability at scale remains a challenge. Organisations are still figuring out the best frameworks to implement these solutions effectively,” she said.

Radhakrishnan emphasised the importance of backend development, data ingestion processes, and user feedback mechanisms. 

“Operationalising GenAI at scale requires robust backend-to-frontend API links and contextualised responses. Moreover, adoption rates play a crucial role. If only a fraction of employees use the new system and provide feedback, the initiative can be deemed a failure,” he said.

The Shift in Industry Perspective

The industry has seen a paradigm shift from questioning the need for GenAI to actively showing intent for agile implementations. “However,” Sharma pointed out, “only a small percentage of organisations, especially in banking, have a proper framework to measure the impact of GenAI. Defining KPIs and assessing the success of GenAI implementations remain critical yet challenging tasks.”

The landscape is evolving rapidly. From data storage to LLM updates, continuous improvements are necessary. Traditional models had a certain refresh frequency, but GenAI requires a more dynamic approach due to the ever-changing environment.

Addressing employee adoption, Radhakrishnan stated, “The fear that AI will take away jobs is largely behind us. Most organisations view GenAI as an enabler rather than a replacement. The design and engineering principles we adopt should focus on seamless integration into employees’ workflows.”

Sharma illustrated this with an example: “We are encouraged to use tools like Microsoft Copilot, but the adoption depends on how seamlessly these tools integrate into our daily tasks. Employees who find them cumbersome are less likely to use them, regardless of their potential benefits.”

Data Privacy and Security

Data privacy and security are paramount in GenAI implementations, especially in sensitive sectors like banking. 

Radhakrishnan explained, “Most GenAI use cases in banks are not customer-facing, minimising the risk of exposing confidential data. However, there are stringent guardrails and updated algorithms for use cases involving sensitive information to ensure data protection.”

Radhakrishnan explained that cloud providers like Microsoft and AWS offer robust security measures. For on-premises implementations, organisations need to establish specific rules to compartmentalise data access. 

“Proprietary data also requires special handling, often involving masking or encryption before it leaves the organisation’s environment,” Sharma added.

Best Practices for Performance Monitoring

Maintaining the performance of GenAI solutions involves continuous integration and continuous deployment (CI/CD). 

“LLMOps frameworks are being developed to automate these processes,” Radhakrishnan noted. “Ensuring consistent performance and accuracy, especially in handling unstructured data, is crucial. Defining a ‘golden dataset’ for accuracy measurement, though complex, is essential.”
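The "golden dataset" idea can be made concrete with a small sketch: score a model's answers against curated reference answers and track the ratio. Everything here (the questions, the stub model, exact-match scoring) is a hypothetical simplification; real evaluations usually need fuzzier matching than string equality.

```python
# A tiny golden dataset of (question, reference answer) pairs.
GOLDEN = [
    ("What is the settlement period?", "T+2"),
    ("Which form covers KYC?", "Form A"),
]


def accuracy(model, golden) -> float:
    """Fraction of golden questions the model answers exactly right."""
    correct = sum(1 for question, expected in golden if model(question) == expected)
    return correct / len(golden)


def stub_model(question: str) -> str:
    # Stand-in for a real LLM call; answers one question correctly.
    return {"What is the settlement period?": "T+2"}.get(question, "unknown")


score = accuracy(stub_model, GOLDEN)
print(f"accuracy = {score:.2f}")
```

In an LLMOps pipeline, a score like this could gate deployment: a model or prompt change that drops below an agreed threshold on the golden set never reaches production.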

Sharma added that the framework for monitoring and measuring GenAI performance is still developing. Accuracy involves addressing hallucinations and ensuring data quality. Proper data management is fundamental to achieving reliable outputs.

CI/CD plays a critical role in the operationalisation of GenAI solutions. “The CI/CD framework ensures that as underlying algorithms and data evolve, the models and frameworks are continuously improved and deployed,” Radhakrishnan explained. “This is vital for maintaining scalable and efficient applications.”

CI/CD frameworks help monitor performance and address anomalies promptly. As GenAI applications scale, these frameworks become increasingly important for maintaining accuracy and cost-efficiency.

Measuring ROI is Not So Easy

Measuring the ROI of GenAI implementations is complex. “ROI in GenAI is not immediately apparent,” Sharma stated. “It’s a long-term investment, similar to moving data to the cloud. The benefits, such as significant time savings and reduction in fines due to accurate information dissemination, manifest over time.”

Radhakrishnan said, “Assigning a monetary value to saved person-hours or reduced fines can provide a tangible measure of ROI. However, the true value lies in the enhanced efficiency and accuracy that GenAI brings to organisational processes.”

“We know the small wins—saving half a day here, improving efficiency there—but quantifying these benefits across organisations is challenging. At present, only a small portion of banks have even started the journey on a roadmap for that,” added Sharma.

Sharma explained that investment in GenAI is booming, but there is a paradox. “If you go to any quarterly earnings call, everybody will say we are investing X number of dollars in GenAI. Very good. But on the ground, everything is a POC (proof of concept), and everything seems successful at POC stage. The real challenge comes after that, when a successful POC needs to be deployed at production level. There are very few organisations scaling from POC to production as of now. One of the key reasons for that is uneasiness on the returns from such an exercise – taking us back to the point on ROI.”

“Operational scaling is critical,” Radhakrishnan noted. “Normally, when you do a POC, you have a good sample, and you test the solution’s value. But when it comes to operational scaling, many aspects come into play. It must be faster, more accurate, and cost-effective.” 

Deploying and scaling the solution shouldn’t involve enormous investments. The solution must be resilient, with the right infrastructure. When organisations move from POC to scalable solutions, they often face trade-offs in terms of speed, cost, and continuous maintenance.

The Human Element

Human judgement and GenAI must work in harmony. “There must be synergy between the human and what GenAI suggests. For example, in an investment scenario, despite accurate responses from GenAI, the human in the loop might disagree based on their gut feeling or client knowledge,” said Radhakrishnan.

This additional angle is valuable and needs to be incorporated into the GenAI algorithm’s context. A clash between human judgement and algorithmic suggestions can lead to breakdowns, especially in banking, where a single mistake can result in hefty fines.

Data accuracy is obviously crucial, especially for banks that rely heavily on on-premises solutions to secure customer data. 

“Data accuracy is paramount, and most banks are still on-premises to secure customer data. This creates resistance to moving to the cloud. However, open-source LLMs can be fine-tuned for on-premises use, although initial investments are higher,” added Sharma.

The trade-off is between accuracy and contextualisation. Fine-tuning open-source models is often better than relying solely on larger, generic models.

Radhakrishnan and Sharma both noted that the future of GenAI in banking is moving towards a multi-LLM setup and small language models. “We are moving towards a multi-LLM setup where no one wants to depend on a single LLM for cost-effectiveness and accuracy,” said Sharma. 

Another trend she predicted is the development of small language models specific to domains like banking, which handle nuances and jargon better than generalised models.

Moreover, increased regulatory scrutiny is on the horizon. “There’s going to be a lot more regulatory scrutiny, if not outright regulation, on GenAI,” predicted Radhakrishnan.

“Additionally, organisations currently implementing GenAI will soon need to start showing returns. There’s no clear KPI to measure GenAI’s impact yet, but this will become crucial,” he added.

“All the cool stuff, especially in AI, will only remain cool if the data is sound. The more GenAI gathers steam, the more data tends to lose attention,” said Sharma, adding that data is the foundation, and without fixing it, no benefits from GenAI can be realised. “Banks, with their fragmented data, need to consolidate and rein in this space to reap any benefits from GenAI,” she concluded.

Can Gen AI Reduce the Technical Debt of Supply Chain Platforms
https://analyticsindiamag.com/ai-highlights/can-gen-ai-reduce-the-technical-debt-of-supply-chain-platforms/
Fri, 23 Aug 2024

Madhumita Banerjee sheds light on how technical debt accumulates for enterprises, and how generative AI can play a pivotal role in addressing these challenges.


Technical debt, like financial debt, is a concept in information technology where shortcuts, quick fixes or immature solutions used to meet immediate needs burden enterprises with future costs. This debt can significantly impact supply chain efficiency, especially as businesses face the pressures of staying competitive and agile in a post-pandemic world. 

Madhumita Banerjee, associate manager, supply chain and manufacturing at Tredence, sheds light on how technical debt accumulates for enterprises, and how generative AI can play a pivotal role in addressing these challenges.

Banerjee explained that in the context of supply chains, technical debt accumulates when outdated systems, fragmented processes, and manual, siloed workflows are used. Over time, these inefficiencies lead to increased operational costs, reduced responsiveness, and heightened exposure to risks, making it harder for companies to remain competitive.

One of the primary contributors to technical debt, according to Banerjee, is the reliance on legacy systems. “Many supply chains rely on outdated systems that are difficult to integrate with modern technologies, leading to increased maintenance costs and inefficiencies,” she noted. 

These legacy systems, coupled with data silos where information is stored in disparate systems, create significant barriers to seamless information flow, which is critical for supply chain efficiency.

Manual processes also play a role in accumulating technical debt. Tasks requiring human intervention are prone to errors and delays, contributing to inefficiencies and higher operational costs. 

As companies transitioned to digitalisation, the rushed adoption of custom solutions and cloud migrations—often driven by the need to keep pace with technological advancements—introduced added complexity and heightened system maintenance burdens. Generative AI emerges as a pivotal new factor in this scenario. Although early adopters face new risks and the possibility of future debt with each generative AI deployment, the technology shows significant promise in addressing these challenges.

The Role of Generative AI in Addressing Technical Debt

Banerjee emphasised that while analytics has historically helped connect data and enhance visibility, the emergence of generative AI, especially LLMs, marked a significant shift. 

“Conversational AI and LLM-powered agents make it easier for functional partners—both technical and non-technical—to understand and act on complex data,” she explained. This is especially crucial in supply chains, where not all stakeholders, such as warehouse partners and freight workers, are tech-savvy.

One of the most significant advantages of generative AI in supply chain management is its ability to enhance data integration and visibility. For instance, in order processing, which traditionally involves many manual steps prone to errors, generative AI can automate the entire workflow—from order intake and validation to order confirmation—ensuring seamless communication across departments and reducing the need for manual intervention.
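A minimal sketch of that order-processing idea is below. In a real system an LLM might sit behind `extract_order` to parse free-text orders; here a simple stub stands in so the flow is runnable, and the SKU names and rules are invented for illustration.

```python
def extract_order(text: str) -> dict:
    """Parse a free-text order into structured fields.

    In production this step might call an LLM; this stub assumes the
    order ends with '<sku> <quantity>'.
    """
    tokens = text.split()
    return {"sku": tokens[-2], "qty": int(tokens[-1])}


def validate(order: dict, catalogue: set) -> list:
    """Return a list of validation errors (empty means the order is clean)."""
    errors = []
    if order["sku"] not in catalogue:
        errors.append("unknown SKU")
    if order["qty"] <= 0:
        errors.append("quantity must be positive")
    return errors


order = extract_order("Please ship SKU-42 10")
errors = validate(order, {"SKU-42", "SKU-7"})
print("confirmed" if not errors else errors)
```

The point of the automation is the chain itself: intake, validation, and confirmation run without a human re-keying data between steps, which is where the manual errors the article mentions tend to creep in.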

Generative AI also holds promise in optimising decision-making processes within supply chain platforms. However, Banerjee noted that the effectiveness of generative AI in this area depends on the maturity of the supply chain itself. 

“For instance, if we have an LLM-powered event listener that detects market sentiments and links this information to the forecast engine, it can significantly narrow down the information demand planners need,” she said. 

This level of optimisation requires a robust and connected data model where all data parts communicate effectively, enabling real-time insights and more accurate demand forecasts.

Predictive Analytics, Real-time Data Processing, and Compliance

Banerjee said that predictive analytics is another area where generative AI can revolutionise supply chain management. She recalled the evolution from traditional “what-if” analyses to more advanced machine learning algorithms that predict outcomes over time. 

However, she pointed out that decision-making has now evolved to require not just predictions but also a deeper understanding of cause-and-effect relationships. “With GenAI, we can weave in causal discovery algorithms that translate complex data into actionable insights presented in simple English for all stakeholders to understand,” she added.

This capability is particularly valuable in areas like inventory forecasting, where understanding the root causes of forecast errors and deviations can lead to more accurate and reliable predictions. By translating these insights into easily digestible information, generative AI empowers supply chain managers to make more informed decisions, ultimately improving efficiency and reducing costs.

Speaking about real-time data processing being critical for the effectiveness of generative AI, Banerjee clarified that it’s not the AI that contributes to real-time processing but the other way around. 

“We need to have real-time data to make sure we can analyse scenarios and use generative AI to its maximum potential,” she explained. For instance, ensuring that data entered into legacy systems is immediately available on the cloud allows LLMs to process and convert this data into actionable insights without delay.

In terms of compliance and risk management, generative AI can bolster efforts by removing manual interventions. Banerjee highlighted procurement and transportation as a key area where GenAI can enhance compliance. In transportation, where contracts are reviewed annually, GenAI-powered systems can query specific contracts, compare terms, and ensure adherence to key metrics like freight utilisation and carrier compliance.
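The contract-compliance use case can be sketched in a few lines: compare a carrier's actual metrics against contracted thresholds. A GenAI layer would extract those thresholds from contract text; here they are given directly, and all names and numbers are hypothetical.

```python
# Contracted minimums (in practice, extracted from contract documents).
CONTRACT_TERMS = {"min_freight_utilisation": 0.85, "min_on_time_rate": 0.95}


def check_compliance(actuals: dict, terms: dict) -> list:
    """Return the list of contracted metrics the carrier breached."""
    breaches = []
    if actuals["freight_utilisation"] < terms["min_freight_utilisation"]:
        breaches.append("freight utilisation below contracted minimum")
    if actuals["on_time_rate"] < terms["min_on_time_rate"]:
        breaches.append("on-time rate below contracted minimum")
    return breaches


print(check_compliance(
    {"freight_utilisation": 0.80, "on_time_rate": 0.97},
    CONTRACT_TERMS,
))
```

The value of the GenAI layer in this picture is upstream of the check: turning free-text annual contracts into structured terms like `CONTRACT_TERMS` so the comparison itself needs no manual review.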

Challenges and Future Outlook

Although generative AI offers numerous benefits, challenges persist. Banerjee stressed the importance of properly vetting the fitment and maturity of a GenAI strategy. “Embarking on the GenAI journey may appear simple, but without a thorough assessment of need and fitment, along with strong investments in data quality, integration, and governance, companies are likely to deepen their technical debt,” she added.

One of the most significant concerns is “hallucination”, where AI models generate incorrect or misleading information. Equally important is validating the data on which AI models are trained, to avoid garbage-in, garbage-out scenarios.

In summary, Banerjee ties the discussion back to the central theme of technical debt. By addressing the key contributors to technical debt—legacy systems, data silos, and manual processes—generative AI can help reduce future costs and risks, enabling companies to pursue digital initiatives with greater confidence. 

“If we can successfully integrate GenAI into our systems, we can revolutionise the entire supply chain platform, making it more efficient, responsive, and competitive,” she concluded.

Toss a Stone in Bangalore and It will Land on a Generative AI Leader
https://analyticsindiamag.com/ai-origins-evolution/toss-a-stone-in-bangalore-and-it-will-land-on-a-generative-ai-leader/
Fri, 23 Aug 2024

But then not everyone can be a good GenAI leader.


Are software engineering roles disappearing? Not exactly. The apparent decline in job postings might just be a shift in titles—think ‘Generative AI Leader’ instead of ‘Software Engineer’. And now, as if on cue, everyone has become a generative AI expert and leader. Not just Silicon Valley, it’s the same in Bengaluru too. 

Vaibhav Kumar, senior director of AI/ML at AdaSci, pointed out this phenomenon in a LinkedIn post. “In Bangalore these days, if you randomly toss a stone, odds are it will land on a Generative AI Leader—earlier, they used to be software engineers, but now they seem to be on the brink of extinction.”

It is true that there are a slew of new jobs that are being created because of generative AI such as AI entrepreneur, chief AI officer, AI ethicist, AI consultant, and at least 50 more. But it has also given rise to people who simply call themselves ‘generative AI leaders’.

Everyone’s an AI Engineer

Kumar’s point is that now everyone is calling themselves an expert in AI as the barrier to entry is significantly lower. But then not everyone can be a good GenAI leader. Vishnu Ramesh, the founder of Subtl.ai, said that the only way to find a good generative AI leader is to ask them how many generative AI projects they have driven and whether these projects actually benefited the organisation. 

“The number of chatbots built will soon overtake Bangalore traffic,” Shashank Hegde, data science and ML manager at HP, said in jest, implying that with every company experimenting with generative AI in use cases, most of them are coming up with chatbots on their systems, which, honestly, people are not very fond of. 

Ramesh and Hegde’s points found takers. More engineers in the discussion described how their team’s ‘generative AI leaders’ were unable to perform basic machine learning tasks, and were mostly experts in data science, not really generative AI. “A Rs 499 course from Udemy or Coursera is delivering GenAI leaders very rapidly,” commented Chetan Badhe.

AIM had earlier reported that fancy courses from new organisations are creating jobs for the future, but also causing freshers to search for jobs that are not there in the market. “GenAI leaders don’t know what’s bidirectional in BERT,” added another user.

Meanwhile, a recent IBM study found that around 49% of Indian CEOs surveyed said they were hiring for GenAI roles that didn’t exist last year. Also, 58% of Indian CEO respondents said they were pushing their organisation to adopt generative AI more quickly than some people are comfortable with.

This makes it clear that generative AI is the leading focus for companies, although for some, it’s more about appearances than substance. 

Moreover, getting ‘generative AI’ on your profile can also boost your salary by up to 50%. AIM Research noted that the median salaries of generative AI developers and engineers ranged between INR 11.1 lakh and 12.5 lakh per annum.

It just makes sense to call yourself a generative AI leader if you are already working in the software engineering field. But getting upskilled in the field to be credible is also important. 

Bengaluru is All About AI

Just like everyone is “thrilled” on LinkedIn, everyone is doing something in AI in Bengaluru. Someone correctly pointed out: If LinkedIn was a city, it would definitely be Bengaluru. The city’s AI hub, HSR Layout, was recently looking for a chief everything officer, someone who can perform a plethora of tasks all alone. 

And it is indeed true that most software engineering is becoming all about generative AI because of the trend and hype. Earlier, Bangalore was filled with software engineers from IT companies and startups; now they have slowly turned into generative AI leaders. Some are even influencers on X or LinkedIn.

At the same time, Bengaluru’s AI culture is also giving rise to 10x engineers, who are able to do the task of 10 people using generative AI. Some even argue that there is no need for a computer science degree anymore to get into AI. It is definitely time to rewrite your resume and say you are a ‘generative AI leader’.

Tech Mahindra Partners with Google Cloud to Accelerate Generative AI Adoption
https://analyticsindiamag.com/ai-news-updates/tech-mahindra-partners-with-google-cloud-to-accelerate-generative-ai-adoption/
Thu, 22 Aug 2024

Tech Mahindra, a global leader in technology consulting and digital solutions, has announced a strategic partnership with Google Cloud aimed at accelerating the adoption of generative AI and driving digital transformation across Mahindra & Mahindra (M&M) entities.  This collaboration seeks to leverage cutting-edge AI and ML to enhance various aspects of engineering, supply chain, pre-sales, […]


Tech Mahindra, a global leader in technology consulting and digital solutions, has announced a strategic partnership with Google Cloud aimed at accelerating the adoption of generative AI and driving digital transformation across Mahindra & Mahindra (M&M) entities. 

This collaboration seeks to leverage cutting-edge AI and ML to enhance various aspects of engineering, supply chain, pre-sales, and after-sales services for M&M, one of India’s leading industrial enterprises.

As part of the partnership, Tech Mahindra will spearhead the cloud transformation and digitisation of M&M’s workspace, deploying the company’s data platform on Google Cloud. This effort is expected to revolutionise M&M’s operations by integrating advanced AI-powered applications into critical business areas. 

Notably, Google Cloud’s AI technologies will be utilised to detect anomalies during the manufacturing process, ensuring zero breakdowns, optimising energy efficiency, enhancing vehicle safety, and ultimately improving the overall customer experience.

Bikram Singh Bedi, vice president and country MD at Google Cloud, emphasised the importance of this collaboration, saying, “Google Cloud is committed to providing companies like M&M with our trusted, secure cloud infrastructure, and advanced AI tools. Our partnership with M&M will help enable a significant cloud and AI transformation for its enterprise and its global customers.”

The partnership will also see Tech Mahindra managing various enterprise applications and workloads for simulators, leveraging its expertise in analytics and cloud migration. This strategic move promises significant value to M&M’s global customer base, aligning with Tech Mahindra’s ongoing efforts to enhance productivity through gen AI tools.


Tech Mahindra and Big Cloud Partnerships

Tech Mahindra has been continuously partnering with big tech cloud providers to leverage their generative AI applications on their platforms. Recently, the company partnered with Microsoft to use dedicated Copilot tools to transform their workplace. 

Similarly, the company has also partnered with Yellow.ai for enhancing HR and customer service automation solutions. 

Gen AI Startup Landscape in India 2024 https://analyticsindiamag.com/ai-highlights/gen-ai-startup-landscape-in-india-2024/ https://analyticsindiamag.com/ai-highlights/gen-ai-startup-landscape-in-india-2024/#respond Wed, 07 Aug 2024 15:29:28 +0000 https://analyticsindiamag.com/?p=10131736

The "Gen AI Startups in India 2024" report showcases the explosive growth of Gen AI startups, driven by venture capital and tech talent in hubs like Bengaluru and Delhi-NCR. It highlights key players like Krutrim and Sarvam AI, the surge in early-stage funding, and India’s rising influence in the global Gen AI landscape.

The post Gen AI Startup Landscape in India 2024 appeared first on AIM.


Generative Artificial Intelligence (Gen AI) is revolutionizing content creation by enabling machines to generate text, images, music, and videos using machine learning models that analyze and replicate patterns from human-created data. This technology is enhancing customer interactions, automating repetitive tasks like RFP responses and multilingual marketing, and exploring unstructured data through conversational interfaces.

From 2021 to 2024, India has witnessed a significant increase in Gen AI startups, driven by venture capital investments, a skilled workforce, and the concentration of startups in tech hubs like Bengaluru, Mumbai, and Delhi-NCR. Among other Indian cities, Hyderabad is emerging as a key player, contributing 7.2% of Gen AI startups, supported by its strong IT and pharmaceutical sectors. Chennai and Ahmedabad are also growing as tech hubs, with Chennai focusing on SaaS and Ahmedabad benefiting from institutions like IIM Ahmedabad.

Funding for Gen AI startups has surged, particularly in early stages, with seed funding accounting for 54% of the rounds. This financial backing is crucial for driving innovation and scaling operations. Notable startups like Krutrim and Sarvam AI are leading the sector, leveraging their funding to advance AI models and expand their market presence.

As of now, India hosts around 150 Gen AI startups, a growth fueled by increasing funding and government initiatives like the National AI Strategy and Startup India program. With continued innovation and expansion, India is set to become a global leader in generative AI.



Key Highlights:

  • The global Gen AI market is projected to grow from USD 39.2 billion in 2023 to USD 1,082.4 billion by 2033, expanding at a CAGR of 40.22% from 2024 to 2033.
  • There are over 150 Gen AI startups currently operating in India.
  • 51.4% of India’s Gen AI startups are based in Bengaluru, positioning the city as the country’s leading hub for Gen AI startups.
  • Gen AI startups such as Krutrim AI, Sarvam AI, EMA, Neysa Networks, and Raga AI have all received substantial funding in 2024.
  • In 2024, Krutrim AI raised USD 50 million in funding.
  • 54.0% of AI startups obtained funding in their seed round.
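Taken at face value, the projection in the first bullet can be sanity-checked with a one-line compound-growth calculation (assuming a ten-year window from the 2023 base); the implied rate lands close to the cited CAGR:

```python
# Compound annual growth rate implied by the market projection:
# USD 39.2 billion (2023) -> USD 1,082.4 billion (2033), ten compounding years.
start, end, years = 39.2, 1082.4, 10

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # -> Implied CAGR: 39.35%
```

The small gap to the quoted 40.22% likely reflects the report measuring growth over the 2024–2033 window rather than from the 2023 base.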


Read the complete report here:

Kuku FM is Using Generative AI to Make Everyone a Full-Stack Creative Producer https://analyticsindiamag.com/intellectual-ai-discussions/kuku-fm-is-using-generative-ai-to-make-everyone-a-full-stack-creative-producer/ https://analyticsindiamag.com/intellectual-ai-discussions/kuku-fm-is-using-generative-ai-to-make-everyone-a-full-stack-creative-producer/#respond Fri, 02 Aug 2024 06:30:00 +0000 https://analyticsindiamag.com/?p=10131210

"AI is going to be commoditised; everybody will have access to the tools. What will remain crucial is the talent pool you have – the storytellers."

The post Kuku FM is Using Generative AI to Make Everyone a Full-Stack Creative Producer appeared first on AIM.


Kuku FM, a popular audio content platform backed by Google and Nandan Nilekani’s Fundamentum Partnership, is harnessing the power of generative AI to revolutionise how stories are created, produced, and consumed. This transformation is spearheaded by Kunj Sanghvi, the VP of content at Kuku FM, who told AIM that generative AI is part of their everyday work and content creation.

“On the generative AI side, we are working pretty much on every layer of the process involved,” Sanghvi explained. “Right from adapting stories in the Indian context, to writing the script and dialogues, we are trying out AI to do all of these. Now, in different languages, we are at different levels of success, but in English, our entire process has moved to AI.”

Kuku FM is leveraging AI not just for content creation but for voice production as well. The company uses Eleven Labs, ChatGPT APIs, and other available offerings to produce voices directly.

“Dramatic voice is a particularly specific and difficult challenge, and long-form voice is also a difficult challenge. These are two things that most platforms working in this space haven’t been able to solve,” Sanghvi noted. 

Beyond long-form content, Kuku FM also uses generative AI for thumbnail, visual asset, and description generation, and Sanghvi said that the team has custom GPTs for every process.

Compensating Artists

AI is playing a crucial role in ensuring high-quality outputs across various languages and formats. “In languages like Hindi and Tamil, the quality is decent, but for others like Telugu, Kannada, Malayalam, Bangla, and Marathi, the output quality is still poor,” said Sanghvi. 

However, the quality improves every week. “We put out a few episodes even in languages where we’re not happy with the quality to keep experimenting and improving,” Sanghvi added.

Beyond content creation, AI is helping Kuku FM in comprehensively generating and analysing metadata. “We have used AI to generate over 500 types of metadata on each of our content. AI itself identifies these attributes, and at an aggregate level, we can understand what makes certain content perform better than others,” he mentioned.

One of the most transformative aspects of Kuku FM’s use of AI is its impact on creators. The platform is in the process of empowering 5,000 creators to become full-stack creative producers. 

“As the generative AI tools become better, every individual is going to become a full-stack creator. They can make choices on the visuals, sounds, language, and copy, using AI as a co-pilot,” Sanghvi said. “We are training people to become creative producers who can own their content from start to end.”

When asked about the competitive landscape such as Amazon’s Audible or PocketFM, and future plans, Sanghvi emphasised that AI should not be viewed as a moat but as a platform. “Every company of our size, not just our immediate competition, will use AI as a great enabler. AI is going to be commoditised; everybody will have access to the tools. What will remain crucial is the talent pool you have – the storytellers,” he explained.

Everyone’s a Storyteller with AI

In a unique experiment blending generative AI tools, OpenAI co-founder Andrej Karpathy, who has since left the company, used the Wall Street Journal’s front page to produce a music video on August 1, 2024. 

Karpathy copied the entire front page of the newspaper into Claude, which generated multiple scenes and provided visual descriptions for each. These descriptions were then fed into Ideogram AI, an image-generation tool, to create corresponding visuals. Next, the generated images were uploaded into RunwayML’s Gen 3 Alpha to make a 10-second video segment.

Sanghvi also touched upon the possibility of edge applications of AI, like generating audiobooks in one’s voice. “These are nice bells and whistles but are not scalable applications of AI. However, they can dial up engagement as fresh experiments,” he said.

Kuku FM is also venturing into new formats like video and comics, generated entirely through AI. He said that the team is not going for shoots or designing characters in studios. “Our in-house team works with AI to create unique content for video, tunes, and comics,” he revealed.

Sanghvi believes that Kuku FM is turning blockbuster storytelling into a science, making it more accessible and understandable. “The insights and structure of a story can now look like the structure of a product flow, thanks to AI,” Sanghvi remarked. 

“This democratises storytelling, making every individual a potential storyteller.” As Sanghvi aptly puts it, “The only job that will remain is that of a creative producer, finding fresh ways to engage audiences, as AI will always be biased towards the past.”

GenAI Is NOT a Bubble, It’s a Tree  https://analyticsindiamag.com/ai-origins-evolution/genai-is-not-a-bubble-its-a-tree/ https://analyticsindiamag.com/ai-origins-evolution/genai-is-not-a-bubble-its-a-tree/#respond Thu, 01 Aug 2024 05:57:52 +0000 https://analyticsindiamag.com/?p=10131093

And it’s branching...

The post GenAI Is NOT a Bubble, It’s a Tree  appeared first on AIM.


Many believe that the rush to adopt generative AI may soon lead to a bubble burst. OpenAI, creator of ChatGPT, faces high operating costs and insufficient revenue, potentially leading to losses of up to $5 billion in 2024 and risking bankruptcy within a year.

OpenAI is expected to spend nearly $4 billion this year on Microsoft’s servers and almost $3 billion on training its models, while payroll for its roughly 1,500 employees could add another $1.5 billion. In total, operational costs may hit $8.5 billion, while revenue stands at only $3.4 billion.

However, some believe otherwise. “As long as Sam Altman is CEO of OpenAI, OpenAI will never go bankrupt. He will continue to drop mind-blowing demos and feature previews, and raise billions. I am not being sarcastic, it’s the truth,” posted AI influencer Ashutosh Shrivastava on X. 

He added that with products like Sora, the Voice Engine, GPT-4’s voice feature, and now SearchGPT, anyone who thinks OpenAI will go bankrupt is simply underestimating Altman.

As OpenAI prepares to seek more funding in the future, it’s essential for Altman to create more bubbles of hype. Without this, the industry risks underestimating the full impact of generative AI. 

Chinese investor and serial entrepreneur Kai-Fu Lee is bullish about OpenAI becoming a trillion-dollar company in the next two to three years. “OpenAI will likely be a trillion-dollar company in the not-too-distant future,” said Lee recently. 

On the contrary, analysts and investors from major financial institutions like Goldman Sachs, Sequoia Capital, Moody’s, and Barclays have released reports expressing concerns about the profitability of the substantial investments in generative AI.

Sequoia Capital partner David Cahn’s recent blog, “AI’s $600B Question”, points out the gap between AI infrastructure spending and revenue. He suggests the industry needs to generate around $600 billion annually to cover investment costs and achieve profitability.

Early Signs of an AI Bubble? 

Microsoft shares fell 7% on Tuesday as the tech giant reported lower-than-expected revenue. Revenue from its Intelligent Cloud unit, which includes the Azure cloud-computing platform, rose 19% to $28.5 billion in the fourth quarter, missing analysts’ estimates of $28.68 billion.

Despite that, the company announced plans to spend more money this fiscal year to enhance its AI infrastructure, even as growth in its cloud business has slowed, suggesting that the AI payoff will take longer than expected. 

Microsoft CFO Amy Hood explained that the spending is essential to meet the demand for AI services, adding that the company is investing in assets that “will be monetised over 15 years and beyond.” CEO Satya Nadella also said that Azure AI now boasts over 60,000 customers, marking a nearly 60% increase year-on-year, with the average spending per customer also on the rise. 

Last week, Google’s cloud revenue exceeded $10 billion, surpassing estimates for Q2 2024. The company is, however, facing increasing AI infrastructure costs. Google CEO Sundar Pichai insists, “The risk of under-investing far outweighs the risk of over-investing for us.” He warned, “Not investing to stay ahead in AI carries much more significant risks.”

“If you take a look at our AI infrastructure and generative AI solutions for cloud across everything we do, be it compute on the AI side, the products we have through Vertex AI, Gemini for Workspace and Gemini for Google Cloud, etc, we definitely are seeing traction,” Pichai said, elaborating that the company now boasts over two million developers playing around with Gemini on Vertex and AI Studio. 

“AI is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do,” said Jim Covello, Goldman Sachs’ head of global equity research.

Really? 

Recently, Google DeepMind’s AlphaProof and AlphaGeometry 2 AI models worked together to tackle questions from the International Math Olympiad (IMO). The DeepMind team scored 28 out of 42 – enough for a silver medal but one point short of gold.

Meanwhile, the word on the street is that OpenAI is planning to start a healthcare division focused on developing new drugs using generative AI. Recently, the startup partnered with Moderna to develop mRNA medicines. The company is already working with Whoop, Healthify, and 10BedICU in healthcare. 

https://twitter.com/thekaransinghal/status/1815444769868574944

JPMorgan recently launched its own AI chatbot, LLM Suite, providing 50,000 employees (about 15% of its workforce) in its asset and wealth management division with a platform for writing, idea generation, and document summarisation. This rollout marks one of Wall Street’s largest LLM deployments.

“AI is real. We already have thousands of people working on it, including top scientists around the world like Manuela Veloso from Carnegie Mellon Machine Learning,” said JP Morgan chief Jamie Dimon, adding that AI is already a living, breathing entity.

“It’s going to change, there will be all types of different models, tools, and technologies. But for us, the way to think about it is in every single process—errors, trading, hedging, research, every app, every database—you’re going to be applying AI,” he predicted. “It might be as a copilot, or it might be to replace humans.”

Investor and Sun Microsystems founder Vinod Khosla is betting on generative AI and remains unfazed by the surrounding noise. “These are all fundamentally new platforms. In each of these, every new platform causes a massive explosion in applications,” Khosla said. 

Further, he acknowledged that the rush into AI might lead to a financial bubble where investors could lose money, but emphasised that this doesn’t mean the underlying technology won’t continue to grow and become more important.

Declining Costs

Dario Amodei, CEO of Anthropic, has predicted that training a single AI model, such as GPT-6, could cost $100 billion by 2027. In contrast, a trend is emerging towards developing small language models, which are more cost-efficient and easier to run without requiring extensive infrastructure. 

OpenAI co-founder Andrej Karpathy recently said that the cost of building an LLM has come down drastically over the past five years due to improvements in compute hardware (H100 GPUs), software (CUDA, cuBLAS, cuDNN, FlashAttention) and data quality (e.g., the FineWeb-Edu dataset).

Abacus.AI chief Bindu Reddy predicted that in the next five years, smaller models will become more efficient, LLMs will continue to become cheaper to train, and LLM inference will become widespread. “We should expect to see several Sonnet 3.5 class models that are 100x smaller and cheaper in the next one to two years.”

The Bigger Picture 

Generative AI isn’t represented by a single bubble like the dot-com era but is manifested in multiple, industry-specific bubbles. For example, generative AI tools for video creation, such as Sora and Runway, demand much more computational power than customer care chatbots. Despite these variations, generative AI is undeniably a technology with lasting impact and is here to stay.

“I think people are using ‘bubble’ too lightly and without much thought, as they have become accustomed to how impressive ChatGPT or similar tools are and are no longer impressed. They are totally ignoring trillion-dollar companies emerging with countless new opportunities. Not everything that grows is a bubble, and we should stop calling AI a bubble or a trend. It is a new way of doing things, like the internet or smartphones,” posted a user on Reddit. 

“AI is more like…a tree. It took a long time to germinate, sprouted in 2016, became something worth planting in 2022, and is now digging its roots firmly in. Is the tree bubble over now? Heh. Just like a tree, AI’s impact and value will keep growing and evolving. It’s not a bubble; it’s more like an ecosystem,” said another user on Reddit. 

The Bubblegum effect: The issue today is that investors are using OpenAI and NVIDIA as benchmarks for the AI industry, which may not be sustainable in the long term. While NVIDIA has had significant success with its H100s and B200s, it cannot afford to become complacent. 

The company must continually innovate to reduce training costs and maintain its edge. This concern is evident in NVIDIA chief Jensen Huang’s anxiety about the company’s future.

“I am paranoid about going out of business. Every day I wake up in a sweat, thinking about how things could go wrong,” said Huang. 

He further explained that in the hardware industry, planning two years in advance is essential due to the time required for chip fabrication. “You need to have the architecture ready. A mistake in one generation of architecture could set you back by two years compared to your competitor,” he said.

NVIDIA’s success should not be taken for granted, even with the upcoming release of its latest GPU, Blackwell. Alternatives to NVIDIA are increasingly available, particularly for inference tasks, including Google TPUs and Groq. Recently, Groq demonstrated impressive inference speed with Llama 3.1, and Apple selected Google TPUs over NVIDIA GPUs for its model training needs.

Most recently, AI hardware company Etched unveiled Sohu, a chip purpose-built to run transformer models. Etched claims that Sohu can process over 500,000 tokens per second with Llama 70B, and that one 8xSohu server replaces 160 H100s. According to the company, “Sohu is more than ten times faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs.”

Meta recently released Llama 3.1, which is currently competing with GPT-4o. Meta chief Mark Zuckerberg is confident that Llama 3.1 will have a similar impact on the AI ecosystem as Linux had on the operating system world. Moreover, Meta also recently launched AI Studio, which allows creators to build and share customisable AI agents.

In contrast, “I hate the AI hype and, at the same time, I think AI is very interesting,” said Linus Torvalds, the creator of the Linux kernel, in a recent conversation with Verizon’s Dirk Hohndel. When asked if AI is going to replace programmers and creators, Torvalds asserted that he doesn’t want to be a part of the AI hype. He suggested that we should wait ten years before making broad announcements, such as claiming that jobs will be lost in the next five years. 

Bursting the Bubble 

With AI representing more than just a single bubble, some of these bubbles may burst. Gartner predicts that by the end of 2025, at least 30% of generative AI projects will be abandoned after the proof-of-concept stage due to factors such as poor data quality, inadequate risk control, escalating costs, and unclear business value.

Some start-ups that thrived during the initial AI boom are now encountering difficulties. Inflection AI, founded by ex-Google DeepMind veterans, secured $1.3 billion last year to expand their chatbot business. However, in March, the founders and some key employees moved to Microsoft. Other AI firms, like Stability AI, which developed a popular AI image generator, have faced layoffs. The industry also contends with lawsuits and regulatory challenges.

Meanwhile, Karpathy is puzzled as to why state-of-the-art LLMs can perform extremely impressive tasks (e.g., solve complex math problems) while simultaneously struggling with some very simple ones, such as incorrectly determining that 9.11 is larger than 9.9. He calls this “Jagged Intelligence.”

Ascendion’s Generative AI Revenue Increases 382% in H1 24 https://analyticsindiamag.com/ai-news-updates/ascendion-genai-revenue-increases-382-y-o-y/ https://analyticsindiamag.com/ai-news-updates/ascendion-genai-revenue-increases-382-y-o-y/#respond Wed, 31 Jul 2024 12:07:24 +0000 https://analyticsindiamag.com/?p=10130896

Ascendion has trained more than 1,500 employees in GenAI tools so far

The post Ascendion’s Generative AI Revenue Increases 382% in H1 24 appeared first on AIM.


Ascendion, a provider of digital engineering services, reported a 382% increase in generative AI revenue in the first half of 2024 compared to the same period in the previous year. The company has successfully completed over 50 generative AI programmes so far in 2024.

The company said it has trained more than 1,500 employees in GenAI tools, and 57% of the workforce is now Gen AI-literate.

“Our commitment to ‘Engineering to the power of AI’ has driven market-leading progress in the first half of 2024. From groundbreaking client impact to significant productivity gains, we’ve set the bar for the industry. GenAI is already driving a significant part of our growth and business, and as we move into the second half of the year, we’re doubling down on innovation and excellence, poised to deliver even greater value and transformation,” said Karthik Krishnamurthy, CEO of Ascendion.

Leveraging Ascendion’s AVA+ platform, a Fortune 50 bank achieved a 50% increase in productivity and a 40% reduction in effort for data extraction and validation processes. Similarly, a major manufacturing client leveraged generative AI for customer service optimisation, resulting in a 20% increase in operational efficiency and a 25% boost in system efficacy.

Earlier this year, Ascendion also unveiled a new AI Studio in Chennai to foster creativity, collaboration, and real-time problem-solving, enabling clients to witness the transformative power of Gen AI firsthand.

Unlocking Opportunities with GenSQL: Leveraging Large Language Models for Structured Data https://analyticsindiamag.com/ai-trends-future/unlocking-opportunities-with-gensql/ https://analyticsindiamag.com/ai-trends-future/unlocking-opportunities-with-gensql/#respond Wed, 31 Jul 2024 09:41:07 +0000 https://analyticsindiamag.com/?p=10130823


The post Unlocking Opportunities with GenSQL: Leveraging Large Language Models for Structured Data appeared first on AIM.


In today’s data-driven world, the ability to efficiently query and manipulate structured data is paramount for organizations. GenSQL, a tool that harnesses the power of Large Language Models (LLMs), offers a revolutionary approach to interacting with structured data. This article explores the myriad opportunities that GenSQL presents, highlighting its capabilities with practical examples and referencing research from the Massachusetts Institute of Technology¹ (MIT).

GenSQL is a revolutionary technology that empowers Large Language Models (LLMs) to interact seamlessly with structured data through natural language. By translating human language queries into precise SQL statements, GenSQL unlocks a world of possibilities for data analysis and exploration.

Understanding GenSQL

At its core, GenSQL is a system that interprets natural language queries and converts them into executable SQL code, eliminating the need for users to possess in-depth SQL knowledge and making data accessible to a broader audience. By leveraging the advanced natural language processing capabilities of LLMs, it can comprehend complex queries, handle ambiguities, and generate accurate SQL statements. This bridges the gap between technical and non-technical users, facilitating better data utilization and decision-making.

Opportunities Offered by GenSQL

  1. Democratization of Data: GenSQL empowers users from various backgrounds to interact with data without requiring SQL expertise. This democratization of data fosters data-driven decision-making across organizations.
    Example: A marketing analyst can ask, “What is the sales trend for product A in the last quarter?” and GenSQL will generate the corresponding SQL query to retrieve the necessary data.
  2. Enhanced Data Exploration: Users can explore data intuitively through natural language, uncovering hidden patterns and insights.
    Example: A financial analyst can query, “Show me the top 5 customers by revenue,” and GenSQL will generate the SQL query to visualize the results.
  3. Complex Query Handling: GenSQL can handle intricate queries involving multiple tables, joins, aggregations, and filters.
    Example: A data scientist can ask, “Calculate the average order value for customers who made purchases in both Q1 and Q2, grouped by region,” and GenSQL will generate the appropriate SQL query.
  4. Natural Language Interfaces: GenSQL enables the creation of intuitive natural language interfaces for various applications, including chatbots, virtual assistants, and data visualization tools.
    Example: A customer support chatbot can answer questions like “When was my last order?” using GenSQL to retrieve the relevant data from the database.
  5. Augmented Intelligence: GenSQL can assist analysts in formulating complex queries by suggesting relevant terms and refining search criteria.
    Example: An analyst can ask, “Show me the correlation between customer age and purchase frequency,” and GenSQL can suggest additional variables like “product category” or “location” to enhance the analysis.
  6. Improved Data Governance: GenSQL can be integrated with data governance frameworks to ensure data security and compliance.
    Example: GenSQL can prevent unauthorized access to sensitive data by blocking queries that violate data privacy regulations.
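The pattern described above — translate a natural-language question into SQL, then execute it against the database — can be sketched in a few lines. This is purely illustrative: the `to_sql` function below is a hypothetical rule-based stand-in for the LLM call a real GenSQL-style system would make, and the schema and data are invented for the example.

```python
import sqlite3

# Hypothetical stand-in for the LLM step: a real GenSQL-style system would
# prompt a model with the schema and the user's question to produce SQL.
def to_sql(question: str) -> str:
    templates = {
        "top customers by revenue":
            "SELECT name, revenue FROM customers ORDER BY revenue DESC LIMIT 5",
        "average revenue":
            "SELECT AVG(revenue) FROM customers",
    }
    for phrase, sql in templates.items():
        if phrase in question.lower():
            return sql
    raise ValueError(f"cannot translate: {question!r}")

def ask(conn: sqlite3.Connection, question: str):
    """Translate a natural-language question to SQL and execute it."""
    return conn.execute(to_sql(question)).fetchall()

# Invented example schema and data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, revenue REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Acme", 120.0), ("Globex", 95.5), ("Initech", 240.0)])

print(ask(conn, "Show me the top customers by revenue"))
# -> [('Initech', 240.0), ('Acme', 120.0), ('Globex', 95.5)]
```

In a production system the stub would be replaced by a prompted LLM, and the generated SQL would be validated (for instance, against an allow-list of tables) before execution, which is where the data-governance controls mentioned above would hook in.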

Challenges and Considerations

While GenSQL offers significant advantages, it’s essential to consider the following challenges:

  • Data Quality: The accuracy of GenSQL’s generated SQL queries depends on the quality of the underlying data.
  • Ambiguity: Natural language can be ambiguous, leading to potential misinterpretations of user intent.
  • Performance: Complex queries might require optimization to ensure efficient query execution.

Insights from MIT Research

Research from MIT underscores the transformative potential of integrating LLMs with data querying tools like GenSQL. According to a study by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the use of LLMs in generating SQL queries from natural language inputs can lead to significant improvements in data accessibility and usability. The study highlights that LLMs can understand context and nuances in natural language, making them particularly effective for translating complex queries into accurate SQL statements.

Moreover, MIT researchers have demonstrated that LLMs can learn from vast datasets to recognize patterns and predict the structure of SQL queries based on the context provided by the user. This predictive capability enhances the accuracy of generated queries and reduces the need for manual intervention, thereby increasing efficiency and reducing the likelihood of errors.

Conclusion

GenSQL represents a significant advancement in the way organizations interact with structured data. By democratizing data access, enhancing data exploration, and enabling complex query handling, it empowers users to extract maximum value from their data assets. The natural language interface makes the technology accessible to a broader audience, fostering a more data-driven culture within organizations. As it continues to evolve, the opportunities it offers will only expand, driving further innovation and efficiency in data management and analysis.

By integrating GenSQL into their data workflows, businesses can unlock the full potential of their data assets, gaining valuable insights and maintaining a competitive edge in today’s dynamic marketplace.

  1. MIT Research ↩

Indian GenAI Startup Devnagri Secures Funding Led by Inflection Point Ventures https://analyticsindiamag.com/ai-news-updates/indian-genai-startup-devnagri-secures-funding-led-by-inflection-point-ventures/ https://analyticsindiamag.com/ai-news-updates/indian-genai-startup-devnagri-secures-funding-led-by-inflection-point-ventures/#respond Fri, 26 Jul 2024 03:44:48 +0000 https://analyticsindiamag.com/?p=10130265

The company was co-founded by Nakul Kundra and Himanshu Sharma, who collectively bring over 15 years of entrepreneurial experience.

The post Indian GenAI Startup Devnagri Secures Funding Led by Inflection Point Ventures appeared first on AIM.


Devnagri, a generative AI company based in Noida specialising in personalising business communication for non-English speakers, has raised an undisclosed amount in a Pre-Series A round led by Inflection Point Ventures. 

The newly acquired funds will be used for marketing, sales, technology enhancement, R&D, infrastructure, and administrative expenses.

Devnagri leverages advanced NLP and Small Language Models (SLM) to tailor business communications for diverse linguistic audiences, seamlessly integrating their technology into both private and government infrastructures. This approach addresses the unique linguistic needs of non-English speakers, enhancing communication and engagement.

The company was co-founded by Nakul Kundra and Himanshu Sharma, who collectively bring over 15 years of entrepreneurial experience. Kundra, an MBA in Marketing & Finance, has a strong background in business strategy, while Sharma, also an MBA in Marketing and a skilled coder, combines technical expertise with business acumen.

Devnagri’s innovative solutions have earned them top BLEU scores in Indian languages, significantly impacting the Indian language ecosystem. By expanding from NLP products to generative AI and SLMs, the company empowers customers to personalise their content, meeting the urgent need for localised communication. 

This enables businesses to scale operations efficiently in Tier II and Tier III cities, broadening their reach and engagement in cost-effective ways.

Kundra, co-founder of Devnagri, said that communication is for the receiver; the ‘Law of Attraction’ works only when businesses communicate well with their audiences. The company is focused on creating hyper-local communication layers that enable businesses to communicate with their customers in their own language.

“Taking a step forward, we are moving towards offering the enterprises and Government departments with a private cloud infrastructure, to maintain their ownership of their content and by keeping the LLMs/SLMs trained with every usage by the customer,” said Kundra.

Inflection Point Ventures has invested over INR 720 Cr across 200+ startups to date. Mitesh Shah, Co-Founder of Inflection Point Ventures, emphasised the challenges of translating India’s more than 700 languages and the importance of accuracy, context, and cultural nuances. “The platform ensures precise translations, context-awareness, and localisation, enabling seamless communication across diverse Indian languages,” said Shah.

Devnagri has received numerous prestigious awards, including the TieCon Award 2024 in San Francisco, the Graham Bell Award 2023, a feature in Shark Tank India 2022, and recognition as NASSCOM’s Emerging NLP Startup of India.

The opportunity market for Devnagri is projected to be valued at $100 billion globally by 2030, with $53 billion in India, growing at a CAGR of 6.7%. As the language industry takes shape in India, it will create sub-industries and transform communication for everyone.

The post Indian GenAI Startup Devnagri Secures Funding Led by Inflection Point Ventures appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/indian-genai-startup-devnagri-secures-funding-led-by-inflection-point-ventures/feed/ 0
Unlocking Gen AI’s Potential: The Data Platform Imperative https://analyticsindiamag.com/tech-ai-blend/unlocking-gen-ais-potential-the-data-platform-imperative/ https://analyticsindiamag.com/tech-ai-blend/unlocking-gen-ais-potential-the-data-platform-imperative/#respond Mon, 22 Jul 2024 12:12:11 +0000 https://analyticsindiamag.com/?p=10129820

A robust data platform is the cornerstone of successful GenAI adoption.

The post Unlocking Gen AI’s Potential: The Data Platform Imperative appeared first on AIM.

]]>

The world of AI is abuzz with the latest and perhaps the most significant innovation of our times—generative AI (GenAI). From crafting personalised marketing copy to designing novel materials, GenAI promises to revolutionise industries. But before CIOs jump on the bandwagon, a crucial foundation needs to be laid in the form of a robust data platform.

The GenAI Wave: A Wellspring of Business Value

GenAI isn’t just hype; it’s a game-changer. Here are some real-world examples showcasing its power:

  • Unleashing Creativity: Imagine a world where AI can co-write compelling marketing copy, generate unique product designs, or even compose captivating music. This is now a reality with GenAI. For instance, companies like Adobe are leveraging GenAI to automate and enhance creative processes, resulting in faster turnaround times and higher quality outputs, leading to a significant increase in customer engagement.
  • Boosting Efficiency: GenAI can automate repetitive tasks and generate realistic simulations, saving businesses valuable time and resources. For example, Cigna, a healthcare provider, uses GenAI to streamline patient data processing and improve diagnostic accuracy, resulting in better patient outcomes and reduced operational costs.
  • Unlocking New Frontiers: GenAI pushes the boundaries, fostering innovation in various fields. Amazon, a leading e-commerce platform, integrated artificial intelligence to personalise product recommendations, which led to a 20% increase in sales conversions.

The Pitfalls of Poor Data: Why GenAI Needs a Strong Foundation

A recent incident involving DPD (UK’s parcel service) highlights the challenges and risks associated with poor-quality data and inadequate data platforms, especially for AI applications. 

In this case, DPD’s AI chatbot went rogue and ended up abusing a customer. The issue stemmed from a combination of poorly managed data and insufficient oversight of the AI’s training and operations. The chatbot’s inappropriate behaviour caused customer dissatisfaction and damaged the company’s reputation.

While GenAI holds immense potential, its success hinges on one critical factor—data. Imagine feeding a high-performance engine with low-quality fuel—that’s what happens with GenAI and bad data. Here’s why a robust data platform is essential:

  • Garbage In, Garbage Out: GenAI models learn from the data they’re trained on. Siloed, context-poor data leads to biased or inaccurate models. For instance, a company relying on sales data without including customer feedback might generate marketing campaigns that miss the mark.
  • Limited Potential: Fragmented data hinders AI’s ability to identify complex patterns and relationships. A strong data platform unlocks the full potential of GenAI by providing a unified view of all relevant data.
  • Wasted Resources: Cleaning and wrangling bad data consumes valuable time and resources. A well-architected data platform automates data management tasks, freeing up resources for innovation.

Despite GenAI’s promising capabilities, its effectiveness is severely hampered by poor-quality data and inadequate data platforms. Bad data—characterised by inaccuracies, inconsistencies, and lack of context—can lead to misleading insights and suboptimal AI model performance. 

Siloed data sources exacerbate the problem. When data is isolated within different business units, it lacks the comprehensive context needed for GenAI to deliver valuable insights.

Importance of a Good Data Platform

A robust data platform is the cornerstone of successful GenAI adoption. It ensures the integration of high-quality, context-rich data from various sources, breaking down silos and creating a unified data ecosystem. Such a platform secures the data and provides essential services and tools for both conventional reporting and advanced AI/ML use cases.

For example, retail giant Walmart implemented a unified data platform that consolidated data from sales, inventory, and customer interactions. This integration enabled the development of an AI model that accurately predicted customer demand, optimised inventory levels, and reduced stock-outs by 30%.

A robust data platform is the bridge that connects your data silos and empowers your GenAI initiatives. Here’s how:

  • Context is King: A good data platform breaks down silos, integrating data from various sources, enriching it with context, and creating a holistic view of your business. This rich data provides the fuel that GenAI models need to thrive.
  • Scalability for Growth: As your data volume grows, a scalable data platform ensures smooth operation. This is crucial for GenAI, which often requires vast amounts of data for training and refinement.
  • Security and Governance: A secure data platform protects your sensitive information while providing controlled access for authorised users. This is critical for responsible AI development and compliance with data privacy regulations.

A good data platform offers several tangible and intangible benefits, including:

  • Enhanced Decision-Making: With accurate and comprehensive data, businesses can make better and timely decisions.
  • Operational Efficiency: Automated data processes and improved data quality lead to significant time and cost savings.
  • Innovation Enablement: A solid data foundation facilitates the development and deployment of innovative AI solutions that drive business growth.

Next Steps for CXOs

CIOs and tech leaders are under pressure to adopt AI but are struggling to get their data house in order. As per a Gartner survey, 61% of CDAOs admitted that the recent disruption of GenAI made them rethink their data management strategies. However, according to another survey, less than half of organisations have a robust data management framework in place.

To harness the full potential of GenAI, CXOs should prioritise the establishment of a robust data platform. Here are actionable steps:

  1. Assess Data Quality: Conduct a thorough audit of current data sources to identify and rectify quality issues.
  2. Break Down Silos: Implement data integration strategies to ensure seamless data flow across all business units.
  3. Invest in Data Platform Technologies: Leverage advanced data management tools and platforms, such as those offered by Google Cloud, to create a scalable and secure data infrastructure.
  4. Foster a Data-Driven Culture: Encourage collaboration and data sharing across departments, emphasising the strategic importance of high-quality data for AI initiatives.
  5. Monitor and Iterate: Continuously monitor data quality and platform performance through a robust data governance framework, making iterative improvements to adapt to evolving business needs and technological advancements.
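Step 1 above, assessing data quality, can be sketched as a simple audit pass. This is a minimal illustration only; the `audit_records` helper and the customer records are invented, not part of any specific platform’s tooling.

```python
from collections import Counter

def audit_records(records, required_fields):
    """Count missing required fields and exact-duplicate rows in a dataset."""
    missing = Counter()
    seen, duplicates = set(), 0
    for row in records:
        for field in required_fields:
            if row.get(field) in (None, ""):
                missing[field] += 1
        key = tuple(sorted(row.items()))  # hashable fingerprint of the row
        if key in seen:
            duplicates += 1
        else:
            seen.add(key)
    return {"rows": len(records), "missing": dict(missing), "duplicates": duplicates}

# Two issues surface immediately: a blank email and a repeated row.
data = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": ""},
    {"id": 1, "email": "a@x.com"},
]
report = audit_records(data, required_fields=["id", "email"])
print(report)  # {'rows': 3, 'missing': {'email': 1}, 'duplicates': 1}
```

Even a crude pass like this gives the audit in step 1 a concrete starting point before investing in heavier data-management tooling.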

The future belongs to those who can leverage artificial intelligence to its fullest, and everything starts with data. Investing in a robust data platform is not just a technological upgrade; it is a strategic imperative that will drive sustainable growth and transformative business outcomes.

As we stand at the cusp of this AI-driven revolution, let us build the foundations today for a smarter, more agile, and resilient tomorrow.

Searce is an engineering-led modern tech consultancy that empowers clients to futurify by delivering real business outcomes. We help organisations unify data and connect it with groundbreaking AI to unleash transformative insights.


Reach out today!

The post Unlocking Gen AI’s Potential: The Data Platform Imperative appeared first on AIM.

]]>
https://analyticsindiamag.com/tech-ai-blend/unlocking-gen-ais-potential-the-data-platform-imperative/feed/ 0
India’s Beatoven.ai Shows the World How AI Music Generation is Done Right https://analyticsindiamag.com/ai-origins-evolution/indias-beatoven-ai-shows-the-world-how-ai-music-generation-is-done-right/ https://analyticsindiamag.com/ai-origins-evolution/indias-beatoven-ai-shows-the-world-how-ai-music-generation-is-done-right/#respond Mon, 22 Jul 2024 08:46:56 +0000 https://analyticsindiamag.com/?p=10129783

Within a year, Beatoven.ai amassed more than 100,000 data samples, which were all proprietary for them.

The post India’s Beatoven.ai Shows the World How AI Music Generation is Done Right appeared first on AIM.

]]>

AI music generation is a tricky business. Amidst copyright claims and the need for fairly compensating artists, it becomes an uphill task for AI startups, such as Suno.ai or Udio AI, to gain revenue and popularity. 

However, Beatoven.ai, an Indian AI music startup, has gotten the hang of it in the most ethical and responsible way possible.

One of the most important reasons is that its co-founder and CEO, Mansoor Rahimat Khan, is a professional sitar player who comes from a family of musicians going back seven generations. “I was very fascinated by this field of music tech,” he said. 

Khan told AIM that he started his journey at IIT Bombay and realised that though there were not many opportunities in India, he wanted to combine his passion for music and technology. 

Beatoven.ai is part of the JioGenNext 2024, Google for Startups Accelerator, and AWS ML Elevate 2023 programs. Khan said that the team applied to many accelerator programs because they realised they needed a lot of compute to fulfil the goal of building an AI music generator. 

The company raised $1.3 million in its pre-series A round led by Entrepreneur First and Capital 2B, with a total funding of $2.42 million.

After switching several jobs, Khan met Siddharth Bhardwaj and, building on their shared passion for music and tech, founded Beatoven.ai in 2021. “After coming back from Georgia Tech, I got involved in the startup ecosystem, and started working with ToneTag, an audio tech startup funded by Amazon,” said Khan. 

Everyone Needs Background Music in their Life

The co-founders found that the biggest market was in generating soundtracks for indie game developers, agencies, and production houses. “But when we look at the nitty gritty of the industry, copyrights are a very scary thing. We thought that generative AI could be a solution to this.” Khan said that the idea was to figure out how users could give simple prompts and generate audio.

Mansoor Rahimat Khan with Lucky Ali

The initial idea was to create a simple consumer-focused UI where users could select a genre, mood, and duration to generate a soundtrack. But the LLM era hadn’t yet begun, and NLP wasn’t good enough for such tasks. “We started in 2021 before the LLM era, and our venture capital came from Entrepreneur First. We raised a million dollars in 2021 and quickly built our technology from scratch.”

The biggest challenge, as with every other AI company, was the collection of data. “You either partnered with the labels that charged huge licensing fees or scraped [data]. That was the only other option. But if you did that, you would be sued,” said Khan.

All of the Tech

This is where Beatoven.ai takes the edge over other products in the market. Khan and his team started contacting small, and slowly bigger artists for creating partnerships and sourcing their own data. The company had a headstart as no one was talking about this field back then. Within a year, it amassed more than 100,000 data samples, which were all proprietary for them.

During the initial days, Beatoven.ai did not use transformers, which, Khan said, is one of the reasons the quality was not that great. Later, when diffusion models came into the picture, the team realised they were the way forward for AI-based music generation. 

The company started by using different models for different purposes, including the ChatGPT API from OpenAI. The Beatoven.ai platform also uses CLAP (Contrastive Language-Audio Pretraining), which aligns text and audio representations. 

Apart from this, the company uses latent diffusion models like Stability AI’s Stable Audio, VAE models, and AudioLLM for tasks such as generating individual instruments within the music. An ensemble model then mixes these individual audio tracks together. 

For inference, the company uses CPUs (instead of GPUs), which keeps it fast and optimised, while reducing costs. 
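The mixing stage described above can be caricatured as weighted stem summation. This toy `mix_stems` function and its sample data are purely illustrative; Beatoven.ai’s actual ensemble model is a learned system, not a fixed sum.

```python
def mix_stems(stems, gains):
    """Naive additive mix: scale each instrument stem by its gain and sum sample-wise.

    stems: dict of instrument name -> equal-length list of float samples
    gains: dict of instrument name -> linear gain (defaults to 1.0)
    """
    length = len(next(iter(stems.values())))
    mix = [0.0] * length
    for name, samples in stems.items():
        gain = gains.get(name, 1.0)
        for i, sample in enumerate(samples):
            mix[i] += gain * sample
    # Clamp to the valid [-1.0, 1.0] range to avoid clipping artefacts.
    return [max(-1.0, min(1.0, s)) for s in mix]

stems = {"sitar": [0.5, -0.5, 0.25], "drums": [0.4, 0.4, -0.2]}
mix = mix_stems(stems, {"sitar": 1.0, "drums": 0.5})
print(mix)
```

A learned ensemble replaces the fixed gains with context-dependent weighting, but the input/output shape of the mixing step is the same.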

Trained Fairly

Khan admitted that the audio files generated by Suno.ai have superior quality right now, but Suno also uses diffusion models, which makes it a little slow. “The quality is significantly better from where we started, but it’s not quite there yet.” Khan added that Beatoven.ai’s speed is currently high because it uses different models for different tasks.

To further expand the data, Beatoven.ai started partnering with several outlets such as Rolling Stone and packaged it like a creator fund. In January 2023, it announced a $50,000 fund for Indie music as a part of the Humans of Beatoven.ai program for expanding their catalogue. 

This gave Beatoven.ai a lot of popularity and many artists wanted to partner with the team. Khan said that the company aims to do more licensing deals to expand music libraries. “When it comes to Indian labels though, they are not yet open to licensing deals,” said Khan. 

Beatoven.ai’s model is certified as Fairly Trained and also certified by AI for Music as an ethically trained AI model.

Apart from music generation, Beatoven.ai is launching Augment, similar to ElevenLabs’s voice generation model. This would allow agencies to connect to Beatoven.ai’s API and train on their own data to make remixes of their own music. For the demo, Khan showed how a simple sitar tune could be turned into a hip-hop remix. 

“You can just use your existing content and create new songs. That’s the idea,” he said.

Currently, Beatoven.ai is also testing a video-to-audio model using Google’s Gemini, where users can upload a video and the model would understand the context and generate music based on that. Khan showed a demo to AIM where the model could also be guided using text prompts for better quality audio generation. 

Not Everyone is a Musician

Khan envisions that in the near future, companies such as Spotify or YouTube start open sourcing their data and offer APIs to make the AI music industry a little more open.

Meanwhile, speaking with AIM, Udio’s co-founder Andrew Sanchez said, “It’s enabling for people who are just up and coming, who don’t yet have big professional careers, the resources, time or money to really invest in making a career. It’s enabling a whole new set of creators.” This would make everyone a musician.

When it comes to Beatoven.ai, Khan said he aims to head in a more B2B direction, as building a direct consumer app does not make sense. “I don’t believe everybody wants to create music,” he added, noting that not everyone in the world is learning music. That is why the company is currently focused only on background music without vocals. 

The post India’s Beatoven.ai Shows the World How AI Music Generation is Done Right appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/indias-beatoven-ai-shows-the-world-how-ai-music-generation-is-done-right/feed/ 0
Microsoft Inaugurates New Innovation Hub in Bengaluru https://analyticsindiamag.com/ai-news-updates/microsoft-inaugurates-new-innovation-hub-in-bengaluru/ https://analyticsindiamag.com/ai-news-updates/microsoft-inaugurates-new-innovation-hub-in-bengaluru/#respond Thu, 18 Jul 2024 06:40:00 +0000 https://analyticsindiamag.com/?p=10129445

The company has over 20,000 employees across 10 Indian cities – Ahmedabad, Bengaluru, Chennai, Gurugram, New Delhi, Noida, Hyderabad, Kolkata, Mumbai and Pune.

The post Microsoft Inaugurates New Innovation Hub in Bengaluru appeared first on AIM.

]]>

In an effort to expand its footprint in India, generative AI powerhouse Microsoft has opened a new innovation hub in Bengaluru. Puneet Chandok, president of Microsoft India and South Asia, took to LinkedIn to share the news. 

“Embrace a culture of learning and innovation with our maker spaces for prototypes and special projects. Meet our senior architects, who will steer you through bespoke technical engagements focused on transformative business outcomes,” added Chandok.

The company has over 20,000 employees across 10 Indian cities – Ahmedabad, Bengaluru, Chennai, Gurugram, New Delhi, Noida, Hyderabad, Kolkata, Mumbai and Pune.

It has over 14 Microsoft Innovation Centres (MICs) across the country which are part of strategic partnerships with leading academic institutions and aim to grow tech skills. 

The company has announced several collaborations in the generative AI space in India over the last couple of months. For example, it teamed up with Skillsoft to create an AI training program for enterprises, leveraging Skillsoft’s AI Skill Accelerator to teach the use of Microsoft AI tools, including Copilot and Azure OpenAI. 

Additionally, Indian IT giant Tech Mahindra is collaborating with Microsoft to implement Copilot for Microsoft 365 across 15 locations, aiming to improve efficiency for over 10,000 employees. Tech Mahindra will also use GitHub Copilot for 5,000 developers, expecting a 35-40% productivity boost.

Back in February of this year, Microsoft co-founder and philanthropist Bill Gates visited the Microsoft India Development Center (IDC) in Hyderabad, a hub of innovation that he envisioned 25 years ago. He expressed his optimism for India’s unique potential in AI and the company’s strategic focus on harnessing the country’s talent for upcoming features in this space.

The company is partnering with Indian startup Sarvam AI, specialising in Indic LLMs. It is also set to upskill two million Indians in AI by 2025.

The post Microsoft Inaugurates New Innovation Hub in Bengaluru appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/microsoft-inaugurates-new-innovation-hub-in-bengaluru/feed/ 0
AWS and iTNT Hub Collaborate to Nurture Generative AI innovation Among Startups in Tamil Nadu https://analyticsindiamag.com/ai-news-updates/aws-and-itnt-hub-collaborate-to-nurture-generative-ai-innovation-among-startups-in-tamil-nadu/ https://analyticsindiamag.com/ai-news-updates/aws-and-itnt-hub-collaborate-to-nurture-generative-ai-innovation-among-startups-in-tamil-nadu/#respond Tue, 16 Jul 2024 07:17:32 +0000 https://analyticsindiamag.com/?p=10129254

AWS has unveiled a new initiative in partnership with Tamil Nadu Technology (iTNT) Hub to establish a Gen AI startup hub program.

The post AWS and iTNT Hub Collaborate to Nurture Generative AI innovation Among Startups in Tamil Nadu appeared first on AIM.

]]>

AWS India Private Limited has unveiled a new initiative in partnership with Tamil Nadu Technology (iTNT) Hub to establish a generative Artificial Intelligence (AI) startup hub program. This program aims to accelerate the development of generative AI solutions for public-centric initiatives through Tamil Nadu’s startup ecosystem. The program will facilitate collaboration between startups and industry to create public sector-focused solutions using generative AI.

It will target startups working in AI, generative AI, and deep-tech domains, with a focus on applications for government, healthcare, education, and non-profit sectors. iTNT Hub, established by the Ministry of Electronics and Information Technology (MeitY) and the Information Technology and Digital Services Department (IT&DS) of Tamil Nadu, is located at Anna University in Chennai.

It aims to create a deep tech innovation network in Tamil Nadu by leveraging the combined strengths of startups, innovators, academia, government, and industry leaders.

Key features of the program include:

  1. Support for startups at various stages of development, from pre-incorporated teams to mature startups.
  2. Mentorship for founders from over 570 engineering colleges affiliated with Anna University.
  3. Access to research opportunities, sectoral guidance, and funding avenues for startups in incubation.

AWS will provide eligible startups with up to $10,000 in AWS credits, allowing them to experiment with over 240 fully featured services, including innovative generative AI solutions like Amazon Bedrock, Amazon Q, and Amazon SageMaker.

The company will also explore onboarding startups to the AWS Partner Network (APN), subject to eligibility criteria. iTNT Hub will offer additional support through:

  • Technical expertise and mentorship
  • Guidance on business and funding fundamentals
  • Industry connections to understand market opportunities

The program will feature webinars, technical workshops, masterclasses, industry connects, roadshows, hackathons, and customer engagements. A steering management committee comprising executives from AWS India and iTNT Hub will govern the program.

This collaboration aims to strengthen the entrepreneurial environment for startups across Tamil Nadu, with a particular focus on empowering those in locations with limited access to resources. By leveraging AWS’s global technology leadership and iTNT Hub’s local expertise, the program seeks to foster innovation and growth in the generative AI space while addressing public sector needs.

The post AWS and iTNT Hub Collaborate to Nurture Generative AI innovation Among Startups in Tamil Nadu appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/aws-and-itnt-hub-collaborate-to-nurture-generative-ai-innovation-among-startups-in-tamil-nadu/feed/ 0
From a Small Town in Maharashtra to Silicon Valley: Aqsa Fulara’s Inspiring Journey with Google https://analyticsindiamag.com/intellectual-ai-discussions/from-a-small-town-in-maharashtra-to-silicon-valley-aqsa-fularas-inspiring-journey-with-google/ https://analyticsindiamag.com/intellectual-ai-discussions/from-a-small-town-in-maharashtra-to-silicon-valley-aqsa-fularas-inspiring-journey-with-google/#respond Tue, 16 Jul 2024 06:53:15 +0000 https://analyticsindiamag.com/?p=10129243

Fulara is responsible for scaling AI and ML products including Recommendations AI and now Meridian models.

The post From a Small Town in Maharashtra to Silicon Valley: Aqsa Fulara’s Inspiring Journey with Google appeared first on AIM.

]]>

Growing up in Sangli, a small town in Maharashtra, Aqsa Fulara, an AI/ML product manager at Google since 2017, faced the societal norms that, like many other women, often discouraged her from pursuing higher education far from home. 

“Coming from a community where moving out of my parent’s home to a hostel for higher education was frowned upon, I put all my energy towards getting into this prestigious engineering college in the same city,” Fulara told AIM in an exclusive interview.  

Her dedication paid off when she was admitted to Walchand College of Engineering, where she did her BTech in computer science and engineering. This academic achievement was just the beginning. 

Fulara’s passion for learning and her desire to push the boundaries led her to the University of Southern California (USC), where she pursued a master’s degree in engineering management, and since then “there was no looking back!”, Fulara shared gleefully. 

“While my experiences in India provided me with a solid technical foundation and analytical approach to solving problems, my experiences at USC and Stanford focused a lot more on practical applications of cutting-edge technology,” she added. 

According to recent surveys, compared to other developing countries, fewer women in India reported being discouraged from pursuing scientific or technical fields (37% vs. 65%). The primary challenges faced by women students in India are high-stress levels (72%), difficulties in finding internships (66%), and a gap between their expectations and their current curriculum (66%).

Fulara’s path to AI and ML was not marked by a single dramatic moment but rather a gradual buildup of curiosity and fascination with technology. Her inclination towards solving problems and understanding complex systems drew her to this field. 

“That led me to my capstone project on behaviour recognition and predicting traffic congestion in large-scale in-person events and thus, building products for congestion management,” she added. 

Leadership Mantra: Building the Culture of Innovation

If you’re familiar with Google’s Vertex AI Search, you likely know about Recommendations AI. Now branded as Recommendations from Vertex AI Search, this service leverages state-of-the-art machine learning models to provide personalised, real-time shopping recommendations tailored to each user’s tastes and preferences. 

One of the key figures in scaling this product is Fulara, who has been instrumental in its growth since 2021. Fulara has also been the force behind the highly acclaimed products in Google Cloud’s Business Intelligence portfolio, such as Team Workspaces and Looker Studio Pro. 

Fulara considers Looker Studio as one of her favourite projects. “Imagine having a personal data analyst assistant who can provide customised recommendations and help you make informed decisions,” she added. 

Having worked with Google for over seven years now, one thing that Fulara values most about the company is the freedom to explore and innovate. “Whether it’s pursuing a 20% project in a new domain, growing into a particular area of expertise, or participating in company-wide hackathons, Google provides much space for creativity and innovation,” she shared. 

This environment has allowed her to pivot her career towards product management, building on her AI experiences and focusing on delivering business value through customer-centric solutions.

Leading AI product development comes with its own set of challenges. “AI products have a larger degree of uncertainty and ambiguity, with challenges in terms of large upfront investment, uncertain returns, technical feasibility, and evolving regulations,” she explained. 

To manage these challenges, Fulara fosters a culture of experimentation and agility. “We release MVPs for testing far ahead of production cycles to rigorously test and benchmark on production data and user behaviours,” she added, allowing her team to make informed decisions even with incomplete information.

Fulara emphasises the importance of managing scope creep tightly and sharing outcome-based roadmaps upfront. “We’re solving for problem themes, not necessarily just churning out AI features,” she noted. This strategy helps maintain focus and adapt to changes quickly. 

Future of AI 

Looking ahead, Fulara sees generative AI, personalised recommendations, and data analytics as transformative forces in the coming decade, making data and insights more accessible and workflows more collaborative. 

AI and ML models are becoming increasingly pervasive, assisting in personalised shopping journeys, optimising marketing strategies, and improving data-driven decision-making across various industries.

Read more: Beyond Pride Month: True Allyship Needs No Calendar

The post From a Small Town in Maharashtra to Silicon Valley: Aqsa Fulara’s Inspiring Journey with Google appeared first on AIM.

]]>
https://analyticsindiamag.com/intellectual-ai-discussions/from-a-small-town-in-maharashtra-to-silicon-valley-aqsa-fularas-inspiring-journey-with-google/feed/ 0
The Chief Architect of Aadhaar Suggests Indian Govt to Offer ‘Compute as a Bond’ for Generative AI https://analyticsindiamag.com/ai-origins-evolution/the-chief-architect-of-aadhaar-suggests-indian-govt-to-offer-compute-as-a-bond-for-generative-ai/ https://analyticsindiamag.com/ai-origins-evolution/the-chief-architect-of-aadhaar-suggests-indian-govt-to-offer-compute-as-a-bond-for-generative-ai/#respond Mon, 15 Jul 2024 10:50:12 +0000 https://analyticsindiamag.com/?p=10126900

With the Union Budget around the corner, it would be interesting to see the government allocate funds for such initiatives.

The post The Chief Architect of Aadhaar Suggests Indian Govt to Offer ‘Compute as a Bond’ for Generative AI appeared first on AIM.

]]>

India is on the brink of achieving a UPI moment in generative AI, for which it requires the right infrastructure in place. Just as Microsoft revamped Azure for OpenAI, India’s DPI needs a similar transformation. However, it’s easier said than done, as the infrastructure must support the needs of billions of citizens.

The Indian government recently struck a deal with NVIDIA to procure 10,000 GPUs and offer them to local startups, researchers, academic institutions, and other users at a subsidised rate under its INR 10,000 crore India AI Mission. 

Expressing his opinion on the matter, former Aadhaar chief architect Pramod Varma described it as quite ‘tricky’ to evaluate which startups will receive how many GPUs.

“Besides procuring them for their own purposes, the government can potentially create a financial product, like a compute bond—where startups and VCs can buy vouchers by investing in it,” added Varma in an exclusive interview with AIM.

He said that this bond would allow the government to pool the demand for computational resources from multiple startups and VCs, giving the government a collective negotiating power.

“The government can say if so many investments have come through this bond, I have a collective negotiation power, instead of each startup trying to negotiate or each VC trying to negotiate for their own startup,” said Varma. 

Coincidentally, Varma’s idea closely echoes the perspectives of OpenAI chief Sam Altman and NVIDIA CEO Jensen Huang, who envision compute as the new currency, capable of unlocking a $100 trillion industry that can be transformed with AI. 

Altman recently said in an interview with Lex Fridman, “Compute is going to be the currency of the future. It may become the most valuable commodity in the world, and we should invest significantly in expanding compute resources.”

In a similar vein, Altman proposed a concept where everyone would have access to a portion of GPT-7’s computing resources. “I wonder if the future looks something more like ‘universal basic compute’ than universal basic income, where everyone receives a slice of GPT-7 compute,” Altman speculated.

This idea aligns with viewing compute as a bond, where governments invest in building necessary infrastructure to support widespread access to AI resources, including high-speed internet, data centers, and other computing facilities. With the Union Budget around the corner, it would be interesting to see the government allocate funds for such initiatives.
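The demand pooling Varma describes can be illustrated with a toy calculation. All figures are hypothetical: a retail GPU-hour rate that startups would pay individually, and a discounted bulk rate the government could negotiate on the pooled demand.

```python
# Toy sketch of "compute bond" demand pooling. Rates and startup names are
# hypothetical; the point is only that aggregating demand creates collective
# negotiating power.

RETAIL_RATE = 3.00   # $ per GPU-hour when each startup negotiates alone
BULK_RATE = 2.25     # $ per GPU-hour negotiated at pooled volume

# GPU-hours each startup wants to buy via bond vouchers
demand = {"startup_a": 50_000, "startup_b": 120_000, "startup_c": 30_000}

# Cost if each startup negotiates for itself at the retail rate
individual_cost = sum(hours * RETAIL_RATE for hours in demand.values())

# Cost when the bond pools all demand and buys at the bulk rate
pooled_hours = sum(demand.values())
pooled_cost = pooled_hours * BULK_RATE

savings = individual_cost - pooled_cost
print(pooled_hours, savings)
```

Under these assumed rates, pooling 200,000 GPU-hours saves the group a quarter of the individual cost, which is the collective-negotiation effect Varma points to.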

Towards Open Cloud Compute

People+AI recently launched Open Cloud Compute (OCC), a project that seeks to create an open network of compute resources, making it easier for businesses, especially startups, to access the compute power they need without being locked into specific cloud providers.

“Our idea is to have many, many micro data centers collectively behaving like a mega data centre. That is our network model,” said Varma, drawing an analogy that one bee doesn’t matter, but a swarm of bees is suddenly very powerful.

Several compute providers are emerging in India, including Yotta, E2E Networks, Johnaic, Ola Krutrim, and Jarvislabs. Although these providers are not as large as hyperscalers like Microsoft Azure, AWS, and Google Cloud, they have the potential to challenge them collectively.

Yotta, backed by the Hiranandani group, is set to establish a GPU infrastructure with 32,768 GPUs by the end of 2025. Meanwhile, NeevCloud plans to acquire 40,000 GPUs by 2026. Jarvislabs has also secured access to thousands of NVIDIA H100s.

“One of the challenges these smaller players are facing is discoverability. Moreover, besides helping them with marketing, OCC also aims to support them in figuring out the right tax subsidies, infrastructural policies and ease of doing business,” Tanvi Lall, the director of strategy at People+AI, told AIM.

Varma explained that OCC allows lower-capital businesses and SMEs to come in and play the larger game. “Today you can perfectly imagine a 105,000 square feet data center suddenly becoming 500,000 square feet of compute.”

To date, OCC has already teamed up with 24 technology partners, including the likes of Oracle Cloud, Vigyan Labs, Protean Cloud, Dell, NeevCloud, and Tata Communications, among others.

He said the concept is similar to how Microsoft Azure or AWS works. They could have ten or more data centers in the country, but users are only concerned with the APIs they provide. 

Any user can simply log into the Open Cloud Compute Network. Upon typing their requirements in the search box, they will be shown multiple providers. Each provider card displays the location, usage cost, and a green rating of the provider. It also includes the SLAs promised by the provider.

“The idea is to create an open network of providers coming together powered by protocols rather than a platform aggregating everything. They can remain decentralized, but they can be discovered, transacted, or contracted through a protocol, and that’s a very, very interesting way, as big as the internet,” added Varma.
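The provider-card discovery flow described above can be sketched in a few lines. This is a hypothetical schema, not OCC's actual protocol: the field names, providers, prices, and the ranking rule are all illustrative assumptions.

```python
# Hypothetical sketch of provider discovery on an open compute network:
# each decentralized provider publishes a card (location, price, green
# rating, SLA), and a protocol-level search filters and ranks the offers.
from dataclasses import dataclass


@dataclass
class ProviderCard:
    name: str
    location: str
    gpus_available: int
    price_per_gpu_hour: float  # INR, hypothetical
    green_rating: int          # 1 (worst) to 5 (best)
    sla_uptime: float          # promised uptime, e.g. 0.999


def discover(cards, gpus_needed, min_uptime=0.99):
    """Return providers meeting the requirement, cheapest first."""
    matches = [c for c in cards
               if c.gpus_available >= gpus_needed and c.sla_uptime >= min_uptime]
    return sorted(matches, key=lambda c: c.price_per_gpu_hour)


# Illustrative registry of micro data centres on the network
registry = [
    ProviderCard("micro-dc-blr", "Bengaluru", 64, 180.0, 4, 0.999),
    ProviderCard("micro-dc-pune", "Pune", 16, 150.0, 5, 0.995),
    ProviderCard("micro-dc-hyd", "Hyderabad", 128, 210.0, 3, 0.98),
]

for card in discover(registry, gpus_needed=32):
    print(card.name, card.location, card.price_per_gpu_hour)
```

Because discovery happens through a shared protocol rather than a single aggregator, providers stay decentralized while remaining searchable and contractable, which is the swarm-of-bees model Varma describes.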

OCC aligns with national initiatives like the India AI mission, which aims to build a scalable AI computing infrastructure by deploying thousands of GPUs through public-private collaborations.

According to Lall, hyperscalers like AWS, Azure and Google Cloud could even be part of the network. “Our entire approach is plus one. The idea is not to leave certain folks out. There is room for everybody and there is a reason why we approached Oracle Cloud.”

“If the hyperscalers feel they have something to offer in the Indian context, they are welcome. It would also be foolish for us to imagine that everyone will abandon the big CSPs as and when the network is up and running,” explained Lall.

The post The Chief Architect of Aadhaar Suggests Indian Govt to Offer ‘Compute as a Bond’ for Generative AI appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/the-chief-architect-of-aadhaar-suggests-indian-govt-to-offer-compute-as-a-bond-for-generative-ai/feed/ 0
‘India’s UPI Moment in AI Will Be Driven by Usage, Not Production,’ Says Aadhaar Architect Pramod Varma https://analyticsindiamag.com/ai-origins-evolution/indias-upi-moment-in-ai-will-be-driven-by-usage-not-production/ https://analyticsindiamag.com/ai-origins-evolution/indias-upi-moment-in-ai-will-be-driven-by-usage-not-production/#respond Sun, 14 Jul 2024 13:49:55 +0000 https://analyticsindiamag.com/?p=10126818

Former chief architect of Aadhaar Pramod Varma believes that India’s future is voice-based AI.

The post ‘India’s UPI Moment in AI Will Be Driven by Usage, Not Production,’ Says Aadhaar Architect Pramod Varma appeared first on AIM.

]]>

Last year, AIM probed when India’s ‘UPI Moment’ in AI would arrive, pointing towards the potential for a locally contextualised AI model as a digital public good. Little did the world know that the work had already started and that it was arriving sooner than expected. 

“We will see a UPI moment in India for generative AI through the usage of AI, more than the production of AI,” said Pramod Varma, expressing uncertainty over whether another OpenAI will emerge from India, but wishing AI companies in India (the likes of Krutrim AI, Sarvam AI, and SML) well in their efforts to build advanced foundational models. 

Varma is the former chief architect of Aadhaar & India Stack and the founding architect of the Beckn Protocol. He is currently serving as the chief technology officer at EkStep Foundation and building digital public infrastructure (DPI) for generative AI in India through People+AI’s initiative of Open Cloud Compute. 

A few days ago, OpenAI vice president Srinivas Narayanan expressed his excitement about potential collaborations and praised the diversity in AI applications being built in Bengaluru. “It is inspiring to see the ambition in their thinking and the diversity in the applications,” he said. 

In an exclusive interview with AIM, Varma showed a similar excitement. He believed that India’s ‘UPI moment’ in AI would arrive quicker if we maximised the utility of existing models to address challenges instead of building foundational models like GPT-4 from scratch. 

“I have a feeling that we will see generative AI penetrating through art, media, the movie industry, transport, commerce—everywhere,” said Varma, adding that he sees no reason for it not to happen because we currently live in a very low-performing equilibrium.

A recent report showed that only 22% of Indians are leveraging generative AI for work purposes in healthcare and research, while 76% plan to use it in the next two to five years. “I can only say there is no human being who is not working on generative AI,” said CP Gurnani, the former Tech Mahindra chief, at AIM’s MachineCon GCC Summit recently. 

“The Indian path in AI is different. We are not in the arms race to build the next LLM. Let people with capital, let people who want to pedal ships do all that stuff… We are here to make a difference and our aim is to give this technology in the hands of people,” said Nandan Nilekani, the co-founder and non-executive chairman of Infosys. 

On the contrary, Ola Krutrim chief Bhavish Aggarwal holds a different opinion. He recently said that India has the largest number of developers, silicon designers, data, and IT services in the world. “India can do to AI what China did to manufacturing,” he added, saying that it won’t automatically happen unless we make it happen. 

DPI to the Power of AI 

In a recent interview, Nilekani discussed how India has benefited immensely from DPI, Aadhaar, and UPI and how AI could lead to similar transformative changes. 

He explained that applying AI at a population scale could have an even greater impact. “We call this DPI to the power of AI,” Nilekani added, emphasising that AI can elevate DPI to the next level.

Varma explained that AI is already integrated into GST and Aadhaar. “AI is used extensively within GST to detect circular loops, fraudulent patterns, and process invoices,” he said, highlighting its widespread application.

“You can now introduce voice interfaces. If you can do voice-based payment, UPI will go through the roof. We will see AI play into DPI to further expand and bring efficiency and expansion of DPI,” he continued.  

Varma believes that India’s future is voice-based AI. “Indian entrepreneurs should really look at voice as a completely new human-computer interaction method. It could be very powerful, and I think it’s gonna happen because voice is natural to humans.”

In a previous interaction with AIM, Sarvam AI said that it is currently working on a voice-based Indic LLM and plans to release it this year. 

Meanwhile, the IISc’s ARTPARK (Artificial Intelligence & Robotics Technology Park) is preparing to open-source 16,000 hours of spontaneous speech data from 80 districts as part of Project Vaani, under the Ministry of Electronics and Information Technology’s flagship AI initiative, BHASHINI.

India’s Tryst with Generative AI 

In India, approximately 10 AI startups are currently engaged in developing LLMs from the ground up. Recently, Gurnani unveiled Project Indus, a ChatGPT-like product developed for less than $5 million. 

Despite initiatives like Krutrim AI and Hanooman, these chatbots have yet to achieve the sophistication of GPT-4o or Claude 3.5.

According to Similarweb, Krutrim saw a decline in user numbers from 498.7K visits in April 2024 to 309.2K visits in June 2024. In contrast, SML’s Hanooman had only 80K visits in June 2024, but the company aims for Hanooman to reach 200 million users within its first year of launch.

“I feel it’s unfair to tell them that it’s not working. It takes time. Every business goes through the cycle of production, distribution, and unlocking value for their customers,” said Varma.

Citing healthcare, Varma said that hundreds of enterprises won’t rely solely on one advanced model such as OpenAI’s GPT-4 or Anthropic’s Claude 3.5 Sonnet: “Both specialised and generic models will coexist, and one is not a replacement for the other.” 

“It’s the wrong dream to believe we only need one API. We will need both mega models aiming for AGI and specialised micro models that can integrate into today’s workflows. 

“These models will bring enormous efficiency, accuracy, and cost savings to various sectors—from industrial to healthcare,” he said, giving the example of a specialised model that understands Indian laws and can draft contracts in Indian languages. 

AWS’ chief medical officer Rowland Illing also told AIM that AWS is looking to democratise access to foundation models through partnerships like the one with Anthropic, offering early releases of models such as Claude 3.5 through Bedrock and providing users with a wide selection of models and tools for secure and efficient generative AI applications.

During the pandemic, Co-WIN was built natively on AWS, supporting 2.2 billion vaccination doses across the country. DigiLocker, also powered by AWS, saw the transition from paper-based certificates and documents to digital storage.

The post ‘India’s UPI Moment in AI Will Be Driven by Usage, Not Production,’ Says Aadhaar Architect Pramod Varma appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/indias-upi-moment-in-ai-will-be-driven-by-usage-not-production/feed/ 0
Phenomenal AI Launches India’s First Text-to-Video AI Platform https://analyticsindiamag.com/ai-news-updates/phenomenal-ai-launches-indias-first-text-to-video-ai-platform/ https://analyticsindiamag.com/ai-news-updates/phenomenal-ai-launches-indias-first-text-to-video-ai-platform/#respond Fri, 12 Jul 2024 09:46:13 +0000 https://analyticsindiamag.com/?p=10126673

Phenomenal AI is India's first platform to bridge the gap between ideas and visuals with AI-powered video generation.

The post Phenomenal AI Launches India’s First Text-to-Video AI Platform appeared first on AIM.

]]>

Phenomenal AI is an Indian startup that launched the country’s first text-to-video AI platform, enabling users to create professional-grade videos from text inputs. Founded by Sanjay Rodrigues and team, it aims to revolutionize video content creation across industries with an intuitive, cost-effective solution that simplifies high-quality video production.

Behind the Innovation

Founded in late 2023 by Sanjay Rodrigues, Meenakshi Mukherjee, Firoz F Merchant, and Vyom Mahamunkar, Phenomenal AI comprises a team of 15 individuals across Asia, Europe, and North America.

Key Features and Capabilities in Phenomenal AI

Phenomenal AI’s text-to-video generator offers:

  • An intuitive interface that eliminates technical complexities
  • High-quality video generation “in a fraction of the time compared to traditional methods”
  • Cost-effective solutions, eliminating the need for expensive equipment and crews
  • A variety of styles, music, voiceovers, and scenes to choose from

Sanjay Rodrigues, Founder and CEO of Phenomenal AI, announced the launch on LinkedIn [1],

“India joins the league of extraordinary GenAI video platforms with #PhenomenalAI, India’s first text-to-video generative AI system.” He shared a sample video generated using the platform, showcasing its ability to transform text prompts into visually stunning content. The prompt for the video was “An enchanted forest with tall trees and dappled sunlight. A graceful deer stands alert among the trees on a moss-covered forest floor.”

PhenomenalAI Empowering Storytelling with AI

Phenomenal AI’s platform [2] is designed to bridge the gap between ideas and visuals, sparking a new era of creativity fueled by AI-powered video generation. The company’s statement highlights that this tool will benefit professionals who rely on video production services, as well as individuals seeking to create high-quality videos for personal use. The platform’s intuitive interface eliminates technical complexities, allowing users to produce videos quickly and efficiently without the need for expensive equipment or crews.
Phenomenal AI has also partnered with IIT-Gandhinagar to develop India-centric solutions that cater to the country’s diverse needs.
Currently available in closed beta, Phenomenal AI’s text-to-video generator is poised to become a leading force in AI-powered content creation. By empowering everyone, regardless of technical expertise, to create high-quality videos, Phenomenal AI aims to drive innovation, shape the future of content creation, and help turn anyone into a storyteller in India and beyond.

  1. https://www.linkedin.com/feed/update/urn:li:activity:7216722294159601666/
  2. https://www.phenomenalai.in/

The post Phenomenal AI Launches India’s First Text-to-Video AI Platform appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/phenomenal-ai-launches-indias-first-text-to-video-ai-platform/feed/ 0
SAWiT.AI Announces Initiative to Train 500,000 Women in Gen AI Skills https://analyticsindiamag.com/ai-news-updates/sawit-ai-announces-initiative-to-train-500000-women-in-gen-ai-skills/ https://analyticsindiamag.com/ai-news-updates/sawit-ai-announces-initiative-to-train-500000-women-in-gen-ai-skills/#respond Wed, 10 Jul 2024 12:55:20 +0000 https://analyticsindiamag.com/?p=10126458

Studies by the McKinsey Global Institute estimate that increasing female workforce participation by just 10% could add a staggering $550 billion to India's GDP.

The post SAWiT.AI Announces Initiative to Train 500,000 Women in Gen AI Skills appeared first on AIM.

]]>

SAWiT.AI, a collaborative effort between South Asia Women in Tech (SAWiT) and GUVI, has recently announced an initiative to train 500,000 Indian women in AI skills.

This initiative, the world’s largest women-only generative AI learning challenge, aims to bridge the gender gap in the workforce and position India as a global leader in AI innovation.

India’s female workforce participation rate was a mere 24% in 2022, significantly lower than China’s 61%. This low participation rate is a critical issue as over half of urban women are homemakers. 

Studies by the McKinsey Global Institute estimate that increasing female workforce participation by just 10% could add a staggering $550 billion to India’s GDP.

The initiative will provide hands-on experience with leading AI tools, supported by a distinguished advisory council that includes Roshni Nadar Malhotra, Chairperson of HCL Tech, Samantha Ruth Prabhu, an acclaimed actor and women’s empowerment advocate, and Farzana Haque, a Senior Leader at Tata Consultancy Services.

The SAWiT.AI initiative comprises three main events:

  • SAWiT.AI Learnathon (September 21, 2024): A hands-on learning experience in Generative AI.
  • SAWiT.AI Hackathon (October 2024): The world’s largest women-led Generative AI challenge, where teams develop advanced AI applications.
  • SAWiT.AI Festival (November 2024): Celebrating Generative AI innovation, awarding challenge winners, and recognizing pioneering institutions, partners, and sponsors.

Women from both tech and non-tech backgrounds are encouraged to apply. Interested candidates can register at the SAWiT.AI registration page for a nominal fee of INR 99, with the deadline for registration set for September 18, 2024.

The post SAWiT.AI Announces Initiative to Train 500,000 Women in Gen AI Skills appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/sawit-ai-announces-initiative-to-train-500000-women-in-gen-ai-skills/feed/ 0
Only 22% of Indians are Using GenAI for Work Purposes https://analyticsindiamag.com/ai-news-updates/only-22-of-indians-are-using-genai-for-work-purposes/ https://analyticsindiamag.com/ai-news-updates/only-22-of-indians-are-using-genai-for-work-purposes/#respond Wed, 10 Jul 2024 09:39:46 +0000 https://analyticsindiamag.com/?p=10126420

Elsevier's study which surveyed nearly 3,000 people globally working in research and healthcare shows that while 95% believe that it is a great source of knowledge, 87% think that using genAI tools can improve work quality.

The post Only 22% of Indians are Using GenAI for Work Purposes appeared first on AIM.

]]>

According to Elsevier’s report called “Insights 2024: Attitudes toward AI”, only 22% of Indians are leveraging generative AI for work purposes in healthcare and research while 76% plan to use it in the next two to five years.

In comparison to North America (30%), more respondents from APAC, including India, have used AI for work-related purposes (34%).

The study which surveyed nearly 3,000 people globally working in research and healthcare further discovered that while 95% believe that it is a great source of knowledge, 87% think that using generative AI tools can improve work quality.

Recently, CP Gurnani, former Tech Mahindra chief, said at AIM’s MachineCon GCC Summit, “I can only say there is no human being, including you, who is not working on generative AI.”

Can Generative AI Boost Productivity?

Most studies conducted globally have shown that AI, particularly generative AI, can improve employee productivity, freeing up time for them to focus on other areas of work as well.

A fresh study from Capgemini, also published today, shows that generative AI is expected to play a key role in augmenting the software workforce, assisting in more than 25% of software design, development, and testing work in the next two years.

But this contrasts with Genpact’s recently published report, “The GenAI Countdown”, which stated that 52% of respondents expressed concerns that an overemphasis on productivity could negatively impact employee experiences.

Even when AIM reached out to multiple companies to understand how generative AI has boosted their productivity, leading to successes such as faster product deployment, we received no responses.

Challenges Persist

In India, many healthcare companies are actively tapping into the potential of generative AI. From startups like Practo and Healthify to hospital chains like Apollo and Narayana, everyone is exploring this segment.

But in India, as well as globally, the primary concerns around this technology revolve around misinformation. As per the survey, around 94% of people are concerned about AI being used for misinformation, 86% worry about critical errors or mishaps, and 81% fear AI will erode critical thinking.

However, there is a clear need for transparency and reliable sources to build trust in AI tools, since AI is expected to rapidly increase the volume of scholarly and medical research. Accordingly, 71% expect AI tools’ results to be based on high-quality trusted sources.

Mira Murati, the chief technology officer of OpenAI, also acknowledged in a recent interview the same risks and concerns, which can lead to biases in LLM-based products.

The post Only 22% of Indians are Using GenAI for Work Purposes appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/only-22-of-indians-are-using-genai-for-work-purposes/feed/ 0
Shiprocket and Snowflake Partner to Boost Data Infrastructure and GenAI https://analyticsindiamag.com/ai-news-updates/shiprocket-and-snowflake-partner-to-boost-data-infrastructure-and-genai/ https://analyticsindiamag.com/ai-news-updates/shiprocket-and-snowflake-partner-to-boost-data-infrastructure-and-genai/#respond Wed, 10 Jul 2024 07:43:02 +0000 https://analyticsindiamag.com/?p=10126359

The partnership claims to reduce data processing time from days to minutes and merchants can gain real-time data insights, optimising operations and decision-making.

The post Shiprocket and Snowflake Partner to Boost Data Infrastructure and GenAI appeared first on AIM.

]]>

Indian eCommerce logistics and shipping software solution provider Shiprocket and AI data cloud giant Snowflake have joined hands to improve data operations and provide faster access to data for over one lakh of its merchants in India. This collaboration is an effort to help businesses make quicker, data-driven decisions and scale data infrastructure.

The firm’s integration with Snowflake claims to reduce data processing time from days to minutes and merchants can gain real-time data insights, optimising operations and decision-making.

“This collaboration with Snowflake is a transformative milestone for our 1.5 lakhs-strong seller community, collectively driving an annualised GMV of over three billion dollars,” said Saahil Goel, managing director and chief executive officer of Shiprocket.

Moreover, the McKinsey-backed company plans to explore generative AI by developing chatbots that allow sellers to interact with their data using natural language to improve accessibility and user experience through Snowflake AI Data Cloud. 

Recently, Snowflake introduced new features to Snowflake Cortex AI and Snowflake ML for expanding access to enterprise AI through a no-code interactive interface and providing access to leading LLMs. The new features are built on Meta’s Llama 3 and Mistral large models. 

“As Shiprocket expands its operations, Snowflake’s AI Data Cloud provides a scalable, cost-effective, secure platform to support their diverse data needs to drive business value,” said Vijayant Rai, MD India, Snowflake.

In a previous interaction with AIM, Rai had said, “We’re looking at India in a multi-dimensional way,” underscoring the company’s diverse operations. This includes a significant presence in Pune, where a team of 500 professionals handles operations and support. Additionally, Snowflake is leveraging India as a hub for global customers through its Global Capability Centers (GCCs).

It is addressing the need for skilled developers by providing extensive training and certification programs in rural India, extending to Tier-2 and Tier-3 locations and small villages.

The post Shiprocket and Snowflake Partner to Boost Data Infrastructure and GenAI appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-news-updates/shiprocket-and-snowflake-partner-to-boost-data-infrastructure-and-genai/feed/ 0
AWS’ Risky $50 Mn Generative AI Bet to Transform the Public Sector https://analyticsindiamag.com/ai-origins-evolution/aws-risky-50-mn-generative-ai-bet-to-transform-the-public-sector/ https://analyticsindiamag.com/ai-origins-evolution/aws-risky-50-mn-generative-ai-bet-to-transform-the-public-sector/#respond Fri, 05 Jul 2024 13:39:13 +0000 https://analyticsindiamag.com/?p=10126028

Companies like Microsoft, Google, and IBM provide cloud services for the public sector; however, the AWS initiative is unique and first of its kind.

The post AWS’ Risky $50 Mn Generative AI Bet to Transform the Public Sector appeared first on AIM.

]]>

At the recently concluded AWS Summit held in Washington, DC, Dave Levy, vice president of AWS public sector, announced a significant new program: the AWS Public Sector Generative Artificial Intelligence (AI) Impact Initiative. 

This $50 million investment aims to accelerate AI innovation across government, nonprofit, education, healthcare, and aerospace sectors, along with comprehensive training and technical expertise. 

The initiative, which will run from June 26, 2024, through June 30, 2026, will leverage AWS’s suite of generative AI services and infrastructure, which includes cutting-edge technologies such as Amazon Bedrock, Amazon Q, Amazon SageMaker, AWS HealthScribe, AWS Trainium, and AWS Inferentia.

AWS will consider factors such as the customer’s technology experience, project maturity, evidence of future adoption, and generative AI skills when determining credit issuance.

Public sector leaders are increasingly turning to generative AI to address pressing challenges. These include resource optimisation, evolving societal needs, improving patient care, personalising education, and enhancing security measures. 

The AWS initiative aims to support these efforts by providing the necessary tools and resources.

“This initiative builds on our ongoing commitment to the safe, secure, and responsible development of AI technology,” said Levy, citing the National Artificial Intelligence Research Resource (NAIRR).

Why Only $50 Mn? 

“We aimed to set a mark high enough to enable significant projects, from small proofs of concept (PoCs) to full-scale production, without excluding smaller participants,” Levy told AIM, saying that $50 million is only for its first initiative, with adjustments possible based on future success and demand. 

Meanwhile, Microsoft Azure, Google Cloud, and Oracle are providing cloud services for the public sector with Microsoft Azure Government, Google Cloud Public Sector, and Oracle cloud infrastructure, respectively. 

However, the AWS initiative is unique as no other big-tech cloud providers offer similar programs. 

Levy believes that the best metric to measure the success of this initiative is to attract as many public sector players as possible. 

“This is only available to support public sector initiatives, and so we hope it gets fully subscribed,” said Levy, adding that once this is done, they can do even more in the future for a bigger programme. 

“We’re just at the very beginning around the world in generative AI; I think we’re really just getting started,” he added. 

Offers Flexibility to Customers Like No Other 

AWS has a history of working closely with public sector organisations in India, including MeitY, the health and family welfare ministry, and the Telangana and Madhya Pradesh governments, among others. 

Now, with generative AI, the potential is huge. According to AWS, the service “offers access to high-performing foundation models from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API. It also provides a broad set of capabilities necessary for building generative AI applications with a focus on security, privacy, and responsible AI,” shared Levy. 

AWS has also introduced several other major partnerships and initiatives in the generative AI space.

Levy emphasised the significance of customer trust, saying, “Customers trust us to maintain the integrity of their most sensitive assets and their most sensitive missions. We prioritise governance while exploring and incorporating emerging technologies, like generative AI.”

The post AWS’ Risky $50 Mn Generative AI Bet to Transform the Public Sector appeared first on AIM.

]]>
https://analyticsindiamag.com/ai-origins-evolution/aws-risky-50-mn-generative-ai-bet-to-transform-the-public-sector/feed/ 0
Why Data Quality Matters in the Age of Generative AI https://analyticsindiamag.com/ai-insights-analysis/generative-ai/ https://analyticsindiamag.com/ai-insights-analysis/generative-ai/#respond Thu, 04 Jul 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10125699

GenAI can augment human intelligence by identifying patterns and correlations that humans may miss.

The post Why Data Quality Matters in the Age of Generative AI appeared first on AIM.

]]>

In the dynamic realm of data engineering, the integration of Generative AI is not just a distant aspiration; it’s a vibrant reality. With data serving as the catalyst for innovation, its creation, processing, and management have never been more crucial.

“While AI models are important, the quality of results we get are dependent on datasets, and if quality data is not tapped correctly, it will result in AI producing incorrect results. With the help of Gen AI, we are generating quality synthetic data for testing our models,” said Abhijit Naik, Managing Director, India co-lead for Wealth Management Technology at Morgan Stanley.

Speaking at AIM’s Data Engineering Summit 2024, Naik said that the generative AI, machine learning, neural network, and deep learning models we have today are the next stage of automation after the RPA era.

“Gen AI will always generate results for you. And when it generates results for you, sometimes it hallucinates. So, what data you feed becomes very critical in terms of the data quality, in terms of the correctness of that data, and in terms of the details of data that we feed,” Naik said. 

However, it’s important to note that human oversight is crucial in this process, Naik added. When integrated carefully into existing pipelines and with appropriate human oversight, GenAI can augment human intelligence by identifying patterns and correlations that humans may miss.

The task of documenting every aspect of their functioning and the knowledge they derive from data is a complex one. This underscores the need for caution and thorough understanding when integrating Generative AI. 

Due to their vast size and training on extensive unstructured data, generative models can behave in unpredictable, emergent ways that are difficult to document exhaustively.  

“This unpredictability can lead to challenges in understanding and explaining their decisions,” Naik said.

GenAI in Banking

Naik emphasised GenAI’s importance in the banking and finance sectors. “It can generate realistic synthetic customer data for testing models while addressing privacy and regulatory issues. This helps improve risk assessment,” he added.

This is especially critical when accurate data is limited, costly, or sensitive. A practical example could be creating transactional data for anti-fraud models.

GenAI models, including Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and large language models like GPT, can generate synthetic data that mimics the statistical features of real-world datasets.
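As a toy illustration of that idea, the sketch below skips the generative model entirely and simply fits the mean and standard deviation of a handful of invented transaction amounts, then samples synthetic amounts that mimic those statistics. A GAN or VAE plays the same role at scale, capturing far richer structure; every number here is made up for illustration.

```python
import random
import statistics

def synthesize_amounts(real_amounts, n, seed=0):
    # Fit the mean and standard deviation of the real amounts, then
    # sample synthetic amounts that mimic those statistics (clipped at 0).
    mu = statistics.mean(real_amounts)
    sigma = statistics.stdev(real_amounts)
    rng = random.Random(seed)  # fixed seed for reproducible test data
    return [max(0.0, rng.gauss(mu, sigma)) for _ in range(n)]

# Invented transaction amounts standing in for sensitive real data.
real = [120.0, 80.5, 310.0, 95.0, 150.0, 60.0, 220.0, 45.0]
fake = synthesize_amounts(real, 1000)
print(len(fake), round(statistics.mean(fake)))
```

The synthetic sample preserves the coarse statistics of the original without exposing any real transaction, which is the property that matters when testing anti-fraud models on sensitive data.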

For example, Capital One and JPMorgan Chase use GenAI to strengthen their fraud and suspicious activity detection systems. Morgan Stanley implemented an AI tool that helps its financial advisors find data anomalies in detecting fraud, and Goldman Sachs uses GenAI to develop internal software.  Customers globally can benefit from 24/7 accessible chatbots that handle and resolve concerns, assist with banking issues, and expedite financial transactions.

A recent study showed that banks that move quickly to scale generative AI across their organisations could increase their revenues by up to 600 basis points (6 percentage points) in three years.

“Of course, the highly regulated nature of banking/finance requires careful model governance, oversight, explainability and integration into existing systems,” Naik concluded.

For Infosys, Generative AI is Not Just Fluff Talk https://analyticsindiamag.com/ai-news-updates/for-infosys-generative-ai-is-not-just-fluff-talk/ https://analyticsindiamag.com/ai-news-updates/for-infosys-generative-ai-is-not-just-fluff-talk/#respond Thu, 27 Jun 2024 06:04:39 +0000 https://analyticsindiamag.com/?p=10125015

Infosys is currently managing over 225 generative AI programs for its clients, and has upskilled 2.5 lakh employees with generative AI.

The post For Infosys, Generative AI is Not Just Fluff Talk appeared first on AIM.


Nandan Nilekani, the co-founder of Infosys has revealed that its early investment in generative AI, through its Infosys Topaz platform, has positioned it as a leader in AI services, with recognition from seven out of eight leading analysts.

“Infosys is fully prepared to deliver value,” Nilekani stated. “We made an early investment last year in building a strong generative AI offering portfolio through Infosys Topaz. Today, we are ranked as a leader in AI services by seven out of eight leading analysts.”

Infosys is currently managing over 225 generative AI programs for its clients. A key aspect of executing these complex transformations is talent. The acquisition of Danske IT and Support Services in India has strengthened Infosys’ digital talent pool. 

“We have invested significantly in hiring talent with proven generative AI skills as well as rapidly upskilling our existing engineering talent,” Nilekani emphasised. Infosys now boasts over 250,000 employees trained in generative AI.

The integration of generative AI components into all service lines and the development of 25 playbooks have allowed Infosys to create significant impacts for its clients. By combining AI with cloud capabilities via Infosys Cobalt, clients are scaling their AI operations more effectively. This effort is part of a broader strategy to drive exponential growth in AI and advance the company’s Chip-to-Cloud strategy.

To bolster this strategy, Infosys acquired InSemi, a semiconductor design services provider, enhancing its domain-relevant enterprise AI capabilities. “We have created 23 AI industry blueprints to solve industry-specific challenges,” Nilekani noted. Strategic acquisitions, such as that of in-tech, an engineering R&D services firm, further deepen Infosys’ capabilities, particularly in the automotive sector with software-defined vehicles.

The Love for Indian Developers

Additionally, Infosys is one of the largest adopters of GitHub Copilot globally, with employees generating over three million lines of code using generative AI large language models.

“We’re working on projects across software engineering, process optimisation, customer support, advisory services, and sales and marketing,” said Salil Parekh, Infosys CEO, during the company’s last quarterly call. “We’re working with market-leading open access and closed large language models,” he added, saying that Infosys feels good about its work with generative AI.

Just this month, in a significant move towards accelerating digital transformation in India, GitHub has partnered with Infosys to launch the first GitHub Centre of Excellence in Bangalore. This initiative aims to leverage AI and advanced software solutions to drive global economic growth.

AWS Announces $50 Million Generative AI Initiative for Public Sector https://analyticsindiamag.com/ai-news-updates/aws-announces-50-million-generative-ai-initiative-for-public-sector/ https://analyticsindiamag.com/ai-news-updates/aws-announces-50-million-generative-ai-initiative-for-public-sector/#respond Wed, 26 Jun 2024 15:30:00 +0000 https://analyticsindiamag.com/?p=10124853

The initiative will provide up to $50 million in AWS Promotional Credits, training, and technical expertise.

The post AWS Announces $50 Million Generative AI Initiative for Public Sector appeared first on AIM.


At the AWS Washington DC event, the company unveiled the AWS Public Sector Generative Artificial Intelligence (AI) Impact Initiative, a two-year, $50 million investment aimed at accelerating AI innovation in public sector organisations.

The initiative, which will run from June 26, 2024, through June 30, 2026, seeks to support critical missions in government, nonprofit, education, healthcare, and aerospace sectors through AWS generative AI services and infrastructure, including Amazon Bedrock, Amazon Q, Amazon SageMaker, AWS HealthScribe, AWS Trainium, and AWS Inferentia.

The initiative will provide up to $50 million in AWS Promotional Credits, training, and technical expertise. Determination of credit issuance will consider factors such as the customer’s experience with technology solutions, project maturity, evidence of future adoption, and generative AI skills. 

This program is open to both new and existing AWS Worldwide Public Sector customers and partners globally, who are building generative AI solutions to address pressing societal challenges.

Public sector leaders are increasingly looking to generative AI to enhance efficiency and agility amidst challenges like resource optimisation, evolving needs, patient care improvement, personalised education, and strengthened security. 

AWS is committed to helping these organisations leverage generative AI and cloud technologies to make a positive societal impact.

“This initiative builds on our ongoing commitment to the safe, secure, and responsible development of AI technology,” said Dave Levy, Vice President of AWS Public Sector. “We are contributing to programs like the National Artificial Intelligence Research Resource and the U.S. Artificial Intelligence Safety Institute Consortium to ensure AI’s safe and ethical development.”

The initiative will offer tailored training, expertise from the Generative AI Innovation Center, technical support, networking opportunities, and global thought leadership platforms. These resources aim to help public sector entities ideate, identify, and implement secure generative AI solutions.

Sabre Uses GenAI Tools to Boost Productivity & Innovation for 800 Software Engineers https://analyticsindiamag.com/intellectual-ai-discussions/sabre-uses-genai-tools-to-boost-productivity-innovation-for-800-software-engineers/ Thu, 20 Jun 2024 07:43:17 +0000 https://analyticsindiamag.com/?p=10124069

Apart from internally experimenting with AI, it has AI-powered models like SabreMosaic and Sabre Travel AI.

The post Sabre Uses GenAI Tools to Boost Productivity & Innovation for 800 Software Engineers appeared first on AIM.


Sabre, a Texas-based travel industry software and technology provider, has been leveraging AI and ML for a while now. In a recent interaction with AIM, Sandeep Bhasin, VP (software engineering) at Sabre, said the company has internally introduced generative AI tools to approximately 800 software engineers, significantly enhancing productivity and innovation.

By leveraging generative AI, the company aims to improve the efficiency of its development processes and bring new products to market faster.

Apart from internally experimenting with AI, it recently unveiled SabreMosaic. This AI-powered modular platform shifts from the traditional PNR system to a modern offer and order approach and enables airlines to offer personalised and dynamic retail experiences. 

Traditional airline processes relied heavily on the PNR to manage traveller information, but this system has struggled to evolve with the expansion of airline product offerings. The rise of modern retailers like Amazon has shifted traveller expectations towards personalised experiences. 

“Travellers now expect the same level of personalisation from their travel providers as they have come to expect in other B2C retailing spaces,” added Bhasin.

This shift is driving airlines to adopt modern retailing platforms like SabreMosaic, which supports both PNR and Offer-Order systems. Offer-Order systems optimise and personalise travel offers, allowing airlines to sell a variety of products beyond just flight tickets. 

“What excites me is how this dual support is crucial as airlines transition from being mere suppliers of seat inventory to modern retailers of diverse travel content,” he added. 

The product is driven by the company’s suite of AI/ML-powered solutions called Sabre Travel AI, developed using Google Cloud’s Vertex AI platform. The team also uses Google’s multi-cloud data warehouse BigQuery for its data lake.

Back in 2022, Sabre announced a 10-year partnership with Google to leverage its capabilities for new innovation. This partnership, combined with Sabre’s extensive travel data, will enable it to provide effective, data-driven recommendations.

Talking about the same, Bhasin said, “Our wealth of travel data gives us the advantage to leverage the power of data, often referred to as the new oil, to drive our analytical models.” This capability allows Sabre to offer real-time interaction and decision-making on a global scale. 

The platform’s real-time capabilities are another differentiator, making interactions faster and more immediate.

Can AI Help in the Cancellation Crisis?

Even though the Indian aviation sector has seen remarkable growth over the past few years, with domestic air passenger traffic increasing by 13% annually to about 15.4 crore in 2023-24, major airlines like IndiGo, Vistara, Air India, Akasa Air, and SpiceJet have struggled to maintain flight schedules. In the January-March 2024 period, flight cancellations and delays rose by 34%.

Given this current situation, where flight cancellations are more common than ever, AI and ML can play a crucial role in mitigating these disruptions. According to Bhasin, re-accommodation logic can manage flight disruptions more efficiently. 

As Bhasin explained, “As airlines move further towards modern retailing, it will be important that a travellers’ personalised ‘bundle’ of products is kept together as an offer so that, even if disruption happens, the passenger still receives their expected ancillaries or experiences.” 

Additionally, robust optimisation strategies and technology are essential for creating schedules and routes that are less prone to disruptions.
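Bhasin’s point about keeping a traveller’s personalised bundle together through a disruption can be sketched with a toy offer-order data model. The class and field names below are hypothetical illustrations, not Sabre’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderItem:
    product: str   # e.g. "flight", "extra-legroom seat", "lounge access"
    price: float

@dataclass
class Order:
    order_id: str
    items: list

def rebook_flight(order: Order, new_flight: OrderItem) -> Order:
    # On disruption, swap only the flight segment; the rest of the
    # traveller's personalised bundle stays attached to the order.
    ancillaries = [i for i in order.items if i.product != "flight"]
    return Order(order.order_id, [new_flight] + ancillaries)

order = Order("ORD-1", [OrderItem("flight", 250.0),
                        OrderItem("extra-legroom seat", 30.0),
                        OrderItem("lounge access", 20.0)])
rebooked = rebook_flight(order, OrderItem("flight", 260.0))
print([i.product for i in rebooked.items])
# ['flight', 'extra-legroom seat', 'lounge access']
```

Because the order, not the flight, is the unit of record, the ancillaries survive the rebooking automatically, which is the advantage of offer-order systems over PNR-centric ones.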

Aviation companies are already using AI, especially generative AI, for customer experience. For example, Air India introduced AI.g, a virtual travel assistant powered by ChatGPT on WhatsApp, and IndiGo launched an AI chatbot called 6Eskai. Both chatbots are built on OpenAI’s GPT-4.

India as a Market

“The India centre plays a crucial role. In fact, a lot of our recent technology transformation initiatives, including our move to Google Cloud, were run out of the India centre,” commented Bhasin. This includes major initiatives like moving from mainframe systems to open systems and transitioning to cloud infrastructure.

The company has major GCCs in Krakow, Bengaluru, and Montevideo. In India,  Air India is one of its largest customers along with various travel agencies and hotels. The Bengaluru Global Capability Center (GCC) plays a crucial role in driving technological transformation. 

The Bengaluru GCC has been instrumental in these transformations, developing industry-leading solutions for the travel business and providing a strong IT backbone to global operations. 

Bhasin highlighted, “Several products from our retail intelligence suite, along with the leadership team behind it, including our MLOps practice, are also based out of our Bengaluru GCC.”

The company’s retail intelligence suite includes AI and ML-powered products like Air Price IQ and Ancillary IQ, which offer personalised and optimised air prices and ancillary services. For example, Air Serbia and Chile-based LATAM air services have co-innovated with Sabre to bring these products to market, resulting in substantial revenue increases. 

According to Bhasin, customers can expect up to a 3% increase in flight revenue with Air Price IQ, and up to a 10% increase in overall ancillary revenue with Ancillary IQ.

The company is currently in talks with these customers to implement Sabre Travel AI solutions.

“India is one of the most dynamic, complex, and fast-growing markets in the world right now, and it’s one of our key markets for 2024 and beyond. And our team in India is critical to our tech transformation and innovation efforts,” concluded Bhasin. 

This Sequoia-Backed Startup Uses AI to Help You Sleep Better  https://analyticsindiamag.com/industry-insights/ai-startups/this-sequoia-backed-startup-uses-ai-to-help-you-sleep-better/ Mon, 17 Jun 2024 12:36:50 +0000 https://analyticsindiamag.com/?p=10123838

Besides consumer products, Wakefit is experimenting with models like OpenAI’s GPT-4 and Google’s Gemini to improve supply chain management, demand planning, and forecasting.

The post This Sequoia-Backed Startup Uses AI to Help You Sleep Better  appeared first on AIM.


Last week, at its first-ever media conference in India, Sequoia-backed Indian startup Wakefit announced its intention to improve sleep health and quality for Indian consumers, this time using AI. 

Regul8 is one of the flagship products of the company’s recently launched Wakefit Zense, the country’s first AI-powered sleep solutions suite. 

It is India’s first mattress temperature controller, which allows users to manually set the temperature between 15°C and 40°C or choose from presets like neutral, cold, warm, ice, and even fire. The icing on the cake: It also supports dual preferences, so individuals can customise their sides of the bed independently, eliminating the common household dispute over the AC remote.

On average, a home air conditioner in India draws about 3,000 watts. However, thanks to Wakefit’s recently launched Regul8 sleep controller mattress, you don’t need an AC anymore. Most importantly, it is 60% more energy-efficient than a 1.5-ton air conditioner.
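Taking the article’s figures at face value (roughly 3,000 W for a home AC and a 60% saving), the back-of-the-envelope arithmetic looks like this; the 8-hour night is an assumption for illustration:

```python
ac_watts = 3000           # article's figure for a home AC
savings = 0.60            # Regul8 stated to be 60% more energy-efficient
regul8_watts = ac_watts * (1 - savings)            # equivalent draw: ~1,200 W
nightly_kwh_saved = ac_watts * savings * 8 / 1000  # over an 8-hour night
print(regul8_watts, nightly_kwh_saved)
```

That works out to roughly 14 units (kWh) of electricity saved per night versus running the AC.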

The other flagship product is an AI-powered contactless sleep-tracking device called Track8.

Explaining the technology behind Track8, Yash Dayal, the CTO of Wakefit, told AIM, “The tracker works by placing a passive sensor below the mattress, and as a person sleeps, it leverages ballistocardiography. This means that tiny vibrations from heartbeats, snoring, or any movement are read by our sensor. These raw signals are then processed through AI and ML models to derive sleep metrics, restfulness, and other body vitals.”
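As a rough illustration of how a ballistocardiographic signal can yield a vital sign, the sketch below counts prominent peaks in a simulated vibration trace to estimate heart rate. Wakefit’s actual models are not public; this is only the simplest possible stand-in:

```python
import math

def estimate_bpm(signal, fs):
    # Rough heart-rate estimate: count prominent local maxima of the
    # mean-removed signal and convert the count to beats per minute.
    mean = sum(signal) / len(signal)
    x = [s - mean for s in signal]
    threshold = 0.5 * max(x)
    peaks = sum(
        1 for i in range(1, len(x) - 1)
        if x[i] > x[i - 1] and x[i] >= x[i + 1] and x[i] > threshold
    )
    return peaks * 60.0 / (len(signal) / fs)

# Simulate 10 s of a 1 Hz (60 bpm) heartbeat-like vibration sampled at 50 Hz.
fs = 50
sig = [math.sin(2 * math.pi * 1.0 * i / fs) for i in range(10 * fs)]
print(round(estimate_bpm(sig, fs)))  # about 60
```

Real ballistocardiography signals are far noisier, mixing heartbeat, respiration, and movement, which is where the learned models come in.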

Integrating both Track8 and Regul8 can create a system where temperature regulation is based on how you sleep. According to Dayal, the products, including AI and ML models, were made by the in-house team consisting of about 80 to 100 tech experts.  

Discussing the broader AI strategy, Dayal said, “We don’t rely on generative AI models for these consumer products, but we are experimenting with models like those from OpenAI and Gemini internally for efficient supply chain management, demand planning, and forecasting.”

What about Data Privacy? 

“Users have the option to choose whether to share their data or not. Their data is used solely to provide insights on sleep quality, such as heart rate variability, respiratory rate, movement index, and snoring index. It is anonymised, encrypted, and used only to improve their sleep quality,” said the director and co-founder Chaitanya Ramalingegowda, in an exclusive interview with AIM

Making High-Quality Products Affordable for All 

Founded in 2015, Wakefit’s philosophy is rooted in making high-quality products affordable for middle-class consumers. Led by co-founders Ankit Garg and Ramalingegowda, Wakefit has created a foundation for itself by building coming-of-age products like orthopaedic mattresses and dual comfort solutions, but at affordable prices. 

“Ankit and I come from middle-class backgrounds, so we understand the constraints of operating on a budget. From the beginning, our philosophy has been to make high-quality products affordable. For example, why should an orthopaedic memory foam mattress cost INR 50,000 when it can cost INR 12,000?” said Ramalingegowda.

This principle extends to their latest tech-enabled sleep solutions, which are designed to offer cutting-edge technology at a fraction of the cost of similar products in the US and Europe.

“The core idea was to leverage this success in non-tech products by applying material sciences, to create something uniquely beneficial for sleep,” added Ramalingegowda. 

“In markets like the US and Europe, similar products cost around INR 4.5 lakh. We built our product from the ground up to make it available for INR 45,000 to 50,000. We are middle-class folks building for middle-class India,” explained Ramalingegowda.

For example, high-end sleep technology brands like Eight Sleep offer smart mattresses with advanced sleep tracking and temperature control features, at around INR 2.8 lakh for the Pod 3 model. 

Similarly, Sleep Number provides adjustable mattresses with integrated sleep tracking and temperature regulation, often exceeding INR 3 lakh. Chili Technology and Bryte also offer premium sleep solutions with advanced features, with prices reaching up to approximately INR 3.75 lakh.

By offering its products at a mere fraction of that price, Wakefit positions itself as a cost-effective alternative to these luxury competitors, making sleep technology more accessible to the middle-class Indian market. This strategic pricing is an effort to bridge the gap between affordability and high-quality sleep solutions.

Future Prospects

Looking ahead, Wakefit plans to integrate generative AI and work with voice-based ecosystems to provide more actionable insights rather than passive data reporting. “Over the next two years, we will steadily launch new features and products, adding more value to our customers,” concluded Ramalingegowda. 

Wakefit’s commitment to innovation and accessibility positions it as a leader in the sleep technology market in India. With the launch of Wakefit Zense, the company is set to revamp how Indians experience sleep, making AI-powered sleep solutions an integral part of everyday life.

TransUnion Bolsters Global Team with 55% Indian Tech Talent  https://analyticsindiamag.com/intellectual-ai-discussions/transunion-bolsters-global-team-with-55-indian-tech-talent/ Thu, 13 Jun 2024 12:36:45 +0000 https://analyticsindiamag.com/?p=10123586

The financial giant has over 25% of its workforce based in India.

The post TransUnion Bolsters Global Team with 55% Indian Tech Talent  appeared first on AIM.


In 2018, Chicago-based consumer credit reporting agency TransUnion, which has been in business for over five decades, opened its first global capability centre (GCC) in Chennai. Since then, the company has expanded its footprint in other Indian cities including Bengaluru, Pune, and Hyderabad.

“The success of our Chennai centre paved the way for us to expand our GCC network to six centres across three continents,” Debasis Panda, SVP and head, TransUnion GCCs (India, South Africa and Costa Rica), told AIM in a recent interaction. 

The India GCC is the largest, accounting for over a quarter of the company’s workforce.

The India centres house 55% of TransUnion’s technology talent, 56% of operations personnel, and 39% of analytics experts. 

This network, spanning India, South Africa, and Costa Rica, now employs over 4,000 associates and has become integral to TransUnion’s global operations. 

“These centres in India leverage local talent to support and enhance TransUnion’s core capabilities, including technology support, data analytics, business process management, and contact centre operations,” Panda explained. 

This strategic location allows TransUnion to expand its time zone and language coverage, facilitating global operations and contributing to economic growth in the regions it operates in.

India Team – Global Hub

India is the hub of TransUnion’s strategy, contributing to its global operations. The India GCCs operate as a microcosm of the enterprise, embedding almost every global function within their operations. 

These centres provide specialist capabilities across various domains, including product and platform solutions, data science and analytics, system architecture, intelligent automation, and business process management.

“The GCC India plays a pivotal role in TransUnion’s mission to migrate products and services to the cloud, ultimately transforming our operations by enabling streamlined product delivery and faster innovation,” said Panda. 

The India team’s contributions have been instrumental in several modernisation initiatives.

Recently, the team’s work on the OneTru platform, which leverages TransUnion’s data assets, cloud infrastructure, and AI capabilities, has significantly enhanced the company’s ability to deliver comprehensive and compliant consumer insights.

Another achievement of the team is revamping TransUnion’s solution enablement platform. This platform integrates separate data and analytic assets designed for credit risk, marketing, and fraud prevention into a unified environment. 

The India team also supports internal infrastructure and development platforms, which allow consistent and secure development, deployment, and management of enterprise applications in a hybrid cloud environment.

Leveraging Talent and Expertise

“India’s strong digital skills and infrastructure support a compelling growth and expansion opportunity for us,” explained Panda. 

The Indian talent offers a full stack of capabilities, including technology solutions, data science, and business process management. “This expertise allows TransUnion to provide leading-edge capabilities to our clients and colleagues worldwide,” the spokesperson said.

 However, due to a limited talent pool and intense competition in specialised areas like AI and ML, the company has introduced specialised initiatives to bolster hiring efforts and retain top talent.

For example, it has implemented a unique talent-focused operating model and a compelling Employee Value Proposition. This includes specialised training programs and a culture of continuous learning. The company also offers programs that support job mobility within the organisation, aiding career growth.

“Our unique operating model and continuous learning culture help us attract and retain top talent,” said Panda.

It also ensures a competitive advantage in attracting and retaining talent through several initiatives. For example, the ‘University Graduate Program’ develops the employment readiness of graduates, while the ‘TU Connect’ program offers comprehensive learning and problem-solving initiatives. 

Employee engagement, health and wellness programs, and CSR initiatives enhance employee satisfaction and retention. 

Additionally, a transparent internal career mobility framework promotes internal career opportunities, enabling 25% of associates to advance internally each year. These efforts are supported by a strong Employee Value Proposition, competitive benefits, and a flexible work environment.

Similarly, another core principle of the GCC in India is a focus on diversity, equity, inclusion, and belonging (DEIB). “Our focused efforts have resulted in a 10% increase in diversity ratio since 2018, with our current gender diversity ratio at 31%,” Panda highlighted.

AIM Media House is hosting its flagship GCC summit, MachineCon, in Bengaluru, on June 28, 2024. The event will feature over 100 GCC leaders, including Balaji Narasimhan, head of operations and GCC site leader at TransUnion. 

Don’t miss this opportunity—get your passes today!

7 Generative AI Use Cases in Education  https://analyticsindiamag.com/industry-insights/ai-in-education/7-generative-ai-use-cases-in-education/ Tue, 11 Jun 2024 06:28:26 +0000 https://analyticsindiamag.com/?p=10123177

Generative AI has become a saviour for edtech firms by enabling hyper-personalised learning experiences and enhanced learning facilities for students.

The post 7 Generative AI Use Cases in Education  appeared first on AIM.


Indian edtech is warming up to large language models (LLMs) to provide hyper-personalised learning experiences to students. Last year, Mayank Kumar, the co-founder and MD at upGrad, told AIM that the edtech firm was exploring the idea of building its own proprietary LLM.

ChatGPT has presented a significant challenge to edtech firms, forcing stakeholders to either embrace its capabilities or risk falling behind. 

At a time when some edtech platforms underestimated the power of ChatGPT and ended up on the brink of failure, others, like Khan Academy, assessed the potential of generative AI early and embraced it.

Let’s look at some exciting use cases of generative AI in education.

Personalised Learning & Course Design

Personalised lesson plans ensure students receive effective education tailored to their needs and interests. AI-powered algorithms can generate these plans by analysing student data, such as past performance, skills, and feedback. 
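In its simplest form, such an algorithm can be sketched as ranking topics by mastery score, with the student’s stated interests as a tie-breaker. The data and function names below are invented for illustration:

```python
def plan_lesson(scores, interests, n_topics=2):
    # Pick the student's weakest topics (lowest mastery score first),
    # breaking ties in favour of topics the student finds interesting.
    ranked = sorted(scores, key=lambda t: (scores[t], t not in interests))
    return ranked[:n_topics]

# Hypothetical mastery scores (0-1) from past performance and feedback.
scores = {"algebra": 0.9, "fractions": 0.4, "geometry": 0.4, "decimals": 0.7}
interests = {"geometry"}
print(plan_lesson(scores, interests))  # ['geometry', 'fractions']
```

Production systems layer far richer signals (pacing, error types, engagement) on top, but the core loop of scoring and prioritising topics is the same.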

Khan Academy’s AI tutor, Khanmigo, assists both teachers and students by not only providing answers but also guiding the learners to find the answers for themselves. With Khanmigo, teachers can differentiate instruction, create lesson plans and quiz questions, group students, and more.

Teaching Assistance

Generative AI can assist in creating new teaching materials, such as questions for quizzes and exercises, as well as explanations and summaries of concepts. This can be especially useful for teachers who need to create a large amount and variety of content for their classes. 

Furthermore, generative AI can facilitate the generation of additional materials to supplement the main course content, such as reading lists, study guides, discussion questions, flashcards, and summaries.

For instance, with Quizizz, an interactive learning platform, one can create interactive, multimedia-rich quizzes to boost student engagement.

Assess Learning Patterns 

AI analyses student performance data and identifies patterns of learning difficulties or gaps in understanding. Adaptive platforms use AI and machine learning (ML) algorithms to assess vast amounts of student performance data, which helps evaluate students’ strengths and weaknesses.

For instance, BYJU’S offers learning experiences tailored to the specific requirements and skills of each student through an internal AI model, BADRI.

It implements personalised “forgetting curves” to pinpoint each student’s strengths and weaknesses, providing customised questions and learning videos for areas of improvement.
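The “forgetting curve” idea can be sketched with the classic Ebbinghaus form R = exp(−t/S), where retention R decays with time t since review and decays more slowly for higher memory stability S. BADRI itself is proprietary, so the topics, numbers, and scheduling rule below are purely illustrative:

```python
import math

def retention(days_since_review, stability):
    # Ebbinghaus-style forgetting curve: R = exp(-t / S).
    return math.exp(-days_since_review / stability)

# Hypothetical per-topic state: (days since last review, stability in days).
topics = {
    "fractions":    (7, 10.0),
    "trigonometry": (2, 3.0),
    "geometry":     (1, 8.0),
}
# Surface the topics the student is most likely to have forgotten.
ranked = sorted(topics, key=lambda t: retention(*topics[t]))
print(ranked[0])  # the weakest-retention topic
```

A personalised system updates each topic’s stability as the student answers questions, so review material lands just as retention is predicted to dip.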

Tutoring Powered by AI

Generative AI can be utilised to create virtual tutoring environments wherein students can interact with a virtual tutor and receive real-time feedback and support. This can be particularly beneficial for students lacking access to in-person tutoring.

Similarly, BYJU’s MathGPT model uses advanced machine learning algorithms to generate accurate solutions for complex mathematical challenges, including trigonometric proofs. 

Collaborative Learning Platforms

Apart from teaching assistance or content creation, generative AI can be used for collaborative learning since students find it easier to brainstorm ideas, engage in group discussions, and work together on projects with classmates worldwide. 

This feat is achievable through AI-powered virtual platforms that facilitate the exchange of unique ideas, perspectives, and insights, fostering out-of-the-box thinking.

BYJU’S has partnered with Google to provide a collaborative and personalised digital learning platform named ‘Vidyartha’ for schools. This partnership facilitates access to Google Classroom for seamless classroom management, organisation, and learning tracking. 

Restoring Old Learning Material 

Generative AI models are trained on large datasets of text to learn patterns and structures of language, allowing them to intelligently reconstruct missing or corrupted portions of text documents such as historical manuscripts, books, or course materials. 

Using techniques like GANs, generative AI can upscale and enhance the resolution and quality of old, low-resolution images and photographs used in educational materials.

Engaging Content Creation

Using foundational models enables the creation of diverse educational materials, like interactive stories, immersive simulations, and more. By leveraging captivating visuals generated using AI, learning becomes an engaging adventure for learners.

For teachers who want materials tailored to their course requirements, NOLEJ, an AI-powered decentralised skill platform, offers an AI-generated e-learning capsule, featuring an interactive video and a glossary, in just three minutes.
