AIM: Artificial Intelligence, and Its Commercial, Social and Political Impact
https://analyticsindiamag.com/

Google Cloud Partners with ParallelDots to Enhance Retail Shelf Monitoring with AI
https://analyticsindiamag.com/ai-news-updates/google-cloud-partners-with-paralleldots-to-enhance-retail-shelf-monitoring-with-ai/ | Tue, 03 Sep 2024

Customers will be able to integrate ParallelDots' shelf data within the Google Cloud platforms, eliminating the costly and complex task of manual integration.


Google Cloud has announced a strategic collaboration with ParallelDots, a leader in retail image recognition solutions, to deliver advanced, real-time AI solutions to global Consumer Packaged Goods (CPG) manufacturers and retailers. The goal is to combine the strengths of both companies and unlock enhanced data accuracy and simplified AI training for CPGs, which is expected to improve in-store execution, customer satisfaction, and sales.

AI in Retail

Google Cloud's secure infrastructure gives end users reliable performance and security. The integration will allow customers to deploy ParallelDots' solutions quickly and easily, and to integrate ParallelDots' shelf data within Google Cloud platforms, eliminating the costly and complex task of manual integration.

The partnership targets a costly problem: the retail industry loses an estimated 25% of sales annually to poor in-store execution. The limitations of manual store audits also mean a dearth of real-time data, efficient audits, and timely reporting. In this context, advanced AI and image recognition (IR) solutions are gaining significance as they address problems like missing SKUs, missing price labels, and incorrect product placement.

Commenting on the partnership, Bikram Singh Bedi, Vice President and Country MD, Google Cloud India, said, “Our aim is to offer our customers a secured infrastructure and capabilities to seamlessly perform complex tasks. With this collaboration we aim to empower ParallelDots to deliver unparalleled solutions to their customers. Our advanced cloud capabilities combined with their innovative technologies will simplify tasks, efficiently run complex problems and enhance cost efficiency.”

Ankit Singh, co-founder and CTO of ParallelDots, expressed confidence in the partnership. “This milestone marks a significant advance in delivering a world-class Retail Image Recognition solution to the global CPGs and retailers. Our partnership with Google Cloud enhances the reliability, security, and speed of our solutions, dramatically reducing deployment time, scaling our Image Recognition solution ShelfWatch, and boosting platform reliability and cost-effectiveness. This is a pivotal moment in our mission to create the world’s foremost retail shelf insights platform,” he said. Google Cloud’s strong presence in the retail space strengthens this collaboration in the marketplace, and the joint solution stands out because it makes it easier for end customers to adopt ParallelDots’ technology, backed by platform security and robust infrastructure.

The 10 Best Videos Created by MiniMax
https://analyticsindiamag.com/ai-mysteries/the-10-best-videos-created-by-minimax/ | Tue, 03 Sep 2024

The brand-new text-to-video model stands out for its realistic AI-generated visuals.


MiniMax was launched as a text-to-video generator by a Chinese startup bearing the same name. The company recently launched its first model, Video-01. This model is designed to create high-resolution videos from text prompts. It operates at a native resolution of 1280 x 720 pixels and can generate videos at 25 frames per second. 

Currently, the maximum video length is limited to six seconds, with plans to extend this to ten seconds in the future. 

https://twitter.com/JunieLauX/status/1829950412340019261

During an interview, founder Yan Junjie mentioned that the company had made significant progress in video generation. However, the specific parameters and technical details of the model have not been disclosed yet. 

“We have indeed made significant progress in video model generation, and based on internal evaluations and scores, our performance is better than Runway,” Junjie said.

The current model is the initial version, with an updated version expected soon. As of now, it offers text-to-video capabilities, with future plans to include image-to-video and text-plus-image generation features.

Founded in 2021 by former SenseTime employees, including Junjie, the startup is backed by Alibaba and Tencent. 

Here, we delve into some of the best videos generated by MiniMax. 

Magic coin 

The company released an official two-minute AI film titled ‘Magic Coin’, generated entirely by its large model. It showcases a coin that appears and disappears within a person’s hand, illustrating the tool’s capability to blend AI-generated elements seamlessly into realistic footage. The film highlights the rapid advancements in AI video generation and its potential applications across industries, including entertainment, advertising, and visual effects.

Cats eat fish, dogs eat meat

The MiniMax video contrasts the natural instincts of cats and dogs, with cats favouring fish and dogs preferring meat, while the camera slowly pans towards a robotic figure appearing at the window. The figure then watches the animals for a brief few seconds.

The video uses clever animations or real-life footage to highlight how each animal reacts to its favourite food. 

Man eating fast food

Here, it portrays a man indulging in a burger, capturing the rush and satisfaction of a quick meal against the backdrop of a food court. The video uses visuals and humorous elements to emphasise the speed at which the man consumes his food, reflecting a common modern-day scenario.

This highlights the advanced visual capabilities of AI in capturing and rendering detailed environments.

Teenager skateboarding 

The video showcases a teenager skateboarding through the city, highlighting the thrill and skill involved in navigating the streets, with dynamic camera angles and fast-paced editing. It also features iconic city landmarks, adding to the sense of adventure and exploration. 

Classical beauty applying lipstick

As the blurred camera comes back into focus, it features a beautiful Asian woman applying lipstick in her room. The video emphasises the delicate, precise motions as she applies the lipstick, with soft lighting and close-up shots showcasing her beauty. The room’s décor enhances the aesthetic appeal, reflecting a sense of elegance that might evoke royalty.

Pixel style

Amid a busy street in a bustling city, all rendered in pixel art style, a small cat walks by, weaving through the pixelated crowd and traffic. The video cleverly contrasts the lively urban environment with the cat’s calm, making the cityscape vibrant and the cat’s presence even more striking. 

The precise detail and seamless editing hint at the creative potential of future AI video tools.

Futuristic high-tech lab

Set in a futuristic high-tech lab, this MiniMax video shows a woman engaging in a conversation with a holographic figure. The sleek, advanced technology is highlighted with the hologram, possibly representing an AI or digital assistant, interacting fluidly with the woman, suggesting a seamless integration of human and machine communication. 

Blade Runner Cyber City

It begins with a sweeping view of a cyber city, characterised by towering skyscrapers, neon lights, and a vibrant, futuristic atmosphere. As the camera slowly shifts, it focuses on a man eating ramen at a roadside food truck, creating an intriguing contrast between the high-tech surroundings and the simplicity of street food. 

This scene emphasises the coexistence of advanced technology and everyday human experiences.

Silver 1977 Porsche 911 

Featuring a sleek, silver 1977 Porsche 911 Turbo cruising through a vibrant cyberpunk landscape, the car’s classic design contrasts strikingly with the futuristic, neon-drenched cityscape surrounding it. The video showcases the vehicle’s smooth motion against a backdrop of glowing neon, towering skyscrapers, and bustling streets. 

The contrast of the vintage car with the high-tech environment highlights a blend of old-world charm and futuristic aesthetics. 

Wig and sunglasses

A sad, bald man appears dejected and downcast. As the scene progresses, he puts on a wig and sunglasses, which instantly transform his mood. The video captures his transition from sadness to joy, highlighting the cheerful change in his expression and posture. 

AI ensures that each gesture and reaction is natural, enhancing the viewer’s engagement and the characters’ believability.

PhysicsWallah’s ‘Alakh AI’ is Making Education Accessible to Millions in India
https://analyticsindiamag.com/ai-origins-evolution/how-physicswallah-is-leveraging-openais-gpt-4o-to-make-education-accessible-to-millions-in-india/ | Tue, 03 Sep 2024

“Today, 85% of the doubts are solved in real time."


India’s ed-tech unicorn PhysicsWallah is using OpenAI’s GPT-4o to make education accessible to millions of students in India. Recently, the company launched a suite of AI products to ensure that students in Tier 2 & 3 cities can access high-quality education without depending solely on their enrolled institutions, as 85% of their enrollment comes from these areas.

Last year, AIM broke the news of PhysicsWallah introducing ‘Alakh AI’, its suite of generative AI tools, which was eventually launched at the end of December 2023. It quickly gained traction, amassing over 1.5 million users within two months of its release.

The suite comes with several products including AI Guru, Sahayak, and NCERT Pitara. “AI Guru is a 24/7 companion available to students, who can use it to ask about anything related to their academics, non-academic support, or more,” said Vineet Govil, CTPO of PhysicsWallah, in an exclusive interview with AIM.

He added that the tool is designed to assist students by acting as a tutor, helping with coursework, and providing personalised learning experiences. It also supports teachers by handling administrative tasks, allowing them to focus more on direct student interaction.

Govil further explained that students can ask questions in any form—voice or image—using a simple chat format. “It’s multimodal,” he said, adding that even if the lecture videos are long—about 30 minutes, one hour, or two hours—the AI tool can identify the exact timestamp relevant to the student’s query.

When discussing Sahayak, he explained that it offers adaptive practice, revision tools, and backlog clearance, enabling students to focus on specific subjects and chapters for a tailored learning experience.

“Think of Sahayak as a helper that assists students in creating study plans. Based on the student’s academic profile and the entrance exam they are preparing for, it offers suggestions on a possible plan to follow. It includes short and long videos, and a question bank,” said Govil.

On the other hand, NCERT Pitara uses generative AI to create questions from NCERT textbooks, including single choice, multiple choice, and fill-in-the-blank questions.

Moreover, PhysicsWallah has introduced a ‘Doubt Engine’ which can solve students’ doubts after class hours. These doubts can be either academic or non-academic. 

“Academic doubts can be further divided into contextual and non-contextual. Contextual doubts are those that our system can understand, analyse, and respond to effectively. Non-contextual doubts are the ones where we are uncertain about the student’s thought process,” explained Govil.

He said that with the help of the slides that the teacher uses to teach and the lecture videos, their model is also able to answer non-contextual doubts. “Today, 85% of the doubts are solved in real time. Previously, it used to take 10 hours for doubts to be resolved by human subject-matter experts.”

The company has also launched an AI Grader for UPSC and CA aspirants who write subjective answers. Govil said that grading these answers is challenging due to the varying handwriting styles, but the company has successfully developed a tool to address this issue.

“Over a few months, we have done a lot of fine-tuning. Today, we are able to understand what a student is writing. At the same time, some students may use diagrams, and we are able to identify those as well,” said Govil.

The Underlying Tech

Govil said that they use OpenAI’s GPT-4o. Regarding the fine-tuning of the model, he said the company has nearly a million questions in their question bank. “We have over 20,000 videos in our repository that are being actively used as data,” he added.

On the technology front, he said that the company has developed its own layer using the RAG architecture. “And we have a vector database that allows us to provide responses based on our own context,” he said.

PhysicsWallah built a multimodal AI bot powered by Astra DB Vector and LangChain in just 55 days. 

Talking about the data resources for RAG, Govil said, “Our subject matter experts (SMEs) regularly update the data, including real-time current affairs and question banks. This continuous updating has helped us build a question bank with over a million entries.”
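The pattern Govil describes, retrieval over a curated question bank and lecture material with the retrieved context injected into a GPT-4o prompt, follows a standard RAG flow. The sketch below is a minimal illustration of that flow and not PhysicsWallah's actual pipeline; the embedding model, the in-memory store, and the answer_doubt helper are assumptions made for the example.

```python
# Minimal RAG sketch (illustrative only, not PhysicsWallah's implementation).
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

# A toy "knowledge base" standing in for question banks / lecture transcripts.
documents = [
    "Lecture 12, 00:41:30 - Faraday's law: induced EMF equals -d(flux)/dt.",
    "Question bank: A ball thrown up at 20 m/s reaches its peak in u/g = ~2 s.",
    "Lecture 3, 00:05:10 - NCERT Chemistry: mole concept and Avogadro's number.",
]

def embed(texts):
    """Embed a list of strings with an OpenAI embedding model."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)  # in a real system this would live in a vector database

def answer_doubt(question: str) -> str:
    """Retrieve the closest document and ground the GPT-4o answer in it."""
    q_vec = embed([question])[0]
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = documents[int(np.argmax(sims))]
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the provided context, "
             "and cite the lecture timestamp if one is present."},
            {"role": "user", "content": f"Context: {context}\n\nDoubt: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer_doubt("How long does a ball thrown at 20 m/s take to reach the top?"))
```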

When asked about LLMs not being good at maths, Govil agreed and said, “It’s a known problem that all the LLMs available today are not doing a great job when it comes to reasoning, and we are aware of it.”

“We are working with partners leading in the LLM space. At the same time, this is really an issue only for high-end applications. For day-to-day algebra and mathematical operations, they are performing well,” he added. 

Alakh AI is Not Alone

Former OpenAI co-founder Andrej Karpathy recently launched his own AI startup, Eureka Labs, an AI-native ed-tech company. Meanwhile, Khan Academy, in partnership with OpenAI, has developed an AI-powered teaching assistant called Khanmigo, which utilises OpenAI’s GPT-4. 

Speaking of its global competitors, Govil said, “I won’t really like to compare [ourselves] with the others, but I can tell you that the kind of models we have, and the kind of responses and the skill at which we are operating, are not seen elsewhere.”

Moreover, recent reports indicate that Lightspeed Venture Partners will lead a $150 million funding round for PhysicsWallah at a valuation of $2.6 billion. 

In conclusion, PhysicsWallah’s innovative suite of tools under the Alakh AI umbrella, which includes Sahayak, AI Guru, and the Doubt Engine, is set to reshape the ed-tech industry with its advanced features and real-time capabilities.

Tech Veteran Jaspreet Bindra Launches ‘AI&Beyond’ to Democratise AI Literacy
https://analyticsindiamag.com/ai-news-updates/tech-veteran-jaspreet-bindra-launches-aibeyond-to-democratise-ai-literacy/ | Tue, 03 Sep 2024

This initiative aims to empower businesses and individuals to keep up with the AI revolution.


Jaspreet Bindra, a tech evangelist and author, has launched an AI literacy initiative, “AI&Beyond”, with the goal of making AI more accessible and understandable. The initiative, created in collaboration with Anuj Magazine, an AI and cybersecurity expert, spans industries. 

“At AI&Beyond, we believe that AI is no longer the future – it is very much the present. Our goal is to ensure that AI literacy becomes as fundamental as reading and arithmetic, especially in large organisations where AI’s impact will be profound. Through AI&Beyond, we aim to bridge the gap between AI’s capabilities and its practical application across various sectors,” said Jaspreet Bindra. 

Inspired by CLR James’ book Beyond A Boundary, AI&Beyond aims to simplify and distill the concept of AI beyond its technical aspects. 

One of the initiative’s flagship programmes, the Generative AI Bootcamp, is designed to equip participants with the most essential AI skills. Additionally, the platform’s Ethics Bootcamp is crucial in today’s AI-driven world, helping organisations not only adopt AI, but do so with a strong ethical foundation. 

The underlying idea is to make learning experiential and immersive, forming the foundation of future AI businesses. The initiative’s resources – including workshops, briefings, webinars, and consulting services – would empower organisations to become more agile, innovative and competitive in the digital age.

Jaspreet Bindra on GenAI

Bindra has been a staunch advocate of Make in India, and to that end, proposed that India should consider building generative AI as a digital public good – known as JanAI, or GenAI for the people. “If you look at ChatGPT or Bard, they are all trained on the internet, where almost 80 to 90% of the data is English, and West-oriented. It doesn’t have vernacular data nor an Indian context,” said Bindra, emphasising why he wanted to bring an Indian context to training models in the current LLM market. 

Also, at MLDS 2024 – India’s biggest generative AI summit, hosted by AIM – Jaspreet Bindra spoke about the history of AI and the concept of singularity in today’s generative AI trends. 

Why AI Can’t Get Software Testing Right
https://analyticsindiamag.com/developers-corner/why-cant-ai-tools-get-programming-tests-right/ | Tue, 03 Sep 2024

It’s already a danger when you write the implementation first; AI is only going to make it worse.


Writing unit tests was already a headache for developers, and AI is making it worse. A recent study has unveiled a critical weakness in LLMs: their inability to create accurate unit tests. 

While ChatGPT and Copilot demonstrated impressive capabilities in generating correct code for simple algorithms (success rates ranging from 63% to 89%), their performance dropped significantly when tasked with producing unit tests which are used to evaluate production code.

ChatGPT’s test correctness fell to a mere 38% for Java and 29% for Python, with Copilot showing only slightly better results at 50% and 39%, respectively.

According to a study published by GitLab in 2023, automated test generation is one of the top use cases for AI in software development, with 41% of respondents currently using it. However, this recent study is now questioning the quality of those tests. 

A full-stack developer named Randy mentioned on the Daily.dev forum that he had tried AI for both writing code and writing unit tests, and it failed miserably because it did not understand the Groovy-based Spock testing framework.

Why AI is Poor at Software Testing

AI-generated tests often lack the necessary context and understanding of the specific requirements and nuances of a given codebase. As a result, AI may drive an increase in “tautological testing” – tests that prove the code does what the code does, rather than proving it does what it is supposed to do.

“It’s already a danger when you write the implementation first; AI is only going to make it worse,” a user explained in the Reddit discussion.

Moreover, relying on AI for test writing can lead to a false sense of security, as generated tests may not cover all critical scenarios, potentially compromising the software quality and reliability.

When an AI is asked to write unit tests for code that contains a bug, it typically doesn’t have the ability to identify that bug. Instead, it treats the existing code as the “correct” implementation and writes tests that validate the current behavior – including the bugs, if any.
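A concrete example makes the failure mode clear. The snippet below is a hypothetical illustration, not taken from the study: a discount function with an off-by-one bug, an AI-style test that simply locks in the current (buggy) behaviour, and a requirement-driven test that would actually catch it.

```python
# Hypothetical example of a "tautological" test vs. a requirement-driven test.

def apply_discount(price: float, percent: float) -> float:
    """Intended behaviour: a 10% discount on 100 should return 90.0."""
    return price - price * (percent / 100) - 1  # bug: stray "- 1"

# What an AI that treats the implementation as ground truth tends to produce:
def test_apply_discount_tautological():
    # Mirrors the buggy output, so the bug is now "protected" by a passing test.
    assert apply_discount(100, 10) == 89.0

# What a requirement-driven test looks like; it fails and exposes the bug:
def test_apply_discount_against_spec():
    assert apply_discount(100, 10) == 90.0   # from the spec, not from the code
    assert apply_discount(100, 0) == 100.0   # no discount means no change
```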

A better use for AI, the developer suggests, is to ask it, “What are all the ways that this code can fail?” Rather than having it write tests, have it identify things you might have missed.

Another report, by researchers from the University of Houston, suggested similar numbers for ChatGPT-3.5: only 22.3% of generated tests were fully correct, and 62.3% were somewhat correct. 

Besides, the report noted that LLMs struggle to understand and write OpenMP and MPI unit tests due to the inherent complexity and domain-specific nature of parallel programming. Also, when provided with “too much” context, LLMs tended to hallucinate, generating code with nonexistent types, methods, and other constructs.

“Like other LLM-based tools, the generated tests are a “best guess” and developers shouldn’t blindly trust them. In many cases, additional debugging and editing are required,” said Ruiguo Yang, the founder of TestScribe. 

AI also struggles when developers need to devise new test cases. Human testers, with their creative problem-solving skills, are still needed to make thorough test plans and define the overall testing scope.

But What is the Solution?

To solve this problem, researchers from the University of Houston used the LangChain memory method. They passed along smaller pieces of the code as a guide, allowing the system to fill in the rest, similar to how autocomplete works when you’re typing.

This suggests that one of the most effective ways to tackle the problem is providing more context to the AI models, such as the full code or associated libraries, which significantly improves the compilation success rate. For instance, with ChatGPT, the increase was from 23.1% to 61.3%, and for Davinci, it was almost 80%.
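In practice, passing smaller pieces of the code as a guide roughly amounts to prompting with the target function plus only its immediate dependencies rather than an entire repository. The sketch below illustrates that chunked-context prompting pattern; it is not the paper's exact setup, it uses the OpenAI client as a stand-in for whichever LLM is in play, and the extract_dependencies helper is a placeholder assumption.

```python
# Illustrative sketch of chunked-context test generation (not the study's exact method).
# Assumes the OpenAI Python SDK; extract_dependencies() is a hypothetical helper.
from openai import OpenAI

client = OpenAI()

def extract_dependencies(source: str, function_name: str) -> str:
    """Placeholder: return only the signatures the target function relies on.
    A real implementation might walk the ast module's tree to collect callees."""
    return "def clamp(x: float, lo: float, hi: float) -> float: ..."

def generate_tests(source: str, function_name: str) -> str:
    # Feed the target function plus a small slice of its dependencies,
    # instead of the whole codebase, to keep the model grounded.
    context = extract_dependencies(source, function_name)
    prompt = (
        "Write pytest unit tests for the function below. Use only names that "
        "appear in the provided context.\n\n"
        f"Dependencies:\n{context}\n\nFunction:\n{source}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

target = "def normalise(x: float) -> float:\n    return clamp(x, 0.0, 1.0)"
print(generate_tests(target, "normalise"))
```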

In recent times, tools like Cursor are helping developers build code without any hassle, and in the future, we might see these tools building better unit tests along with production code. 

But for now, while AI can generate tests quickly, having an experienced engineer will remain crucial to assess the quality and usability of AI-generated code or tests.

Will AI Coding Tools Mark the End of IDEs?
https://analyticsindiamag.com/developers-corner/will-ai-coding-tools-mark-the-end-of-ides/ | Tue, 03 Sep 2024

All IDEs will soon be AI assisted.


Do we even need to learn coding anymore? The question sounds timely as tools such as Cursor and Claude Artifacts let anyone build apps without writing a single line of code. Cursor, basically a glorified fork of the VS Code IDE, is making developers wonder whether this transition to building apps in natural language marks the end of traditional IDEs.


An IDE, or integrated development environment, is a code editor that allows developers to write, test, and debug code, combining multiple tools and features into a single environment. IDEs remain essential for working with programming languages and building useful software. 

The most famous IDE, VS Code, also offers features such as IntelliSense code completion. However, with AI editors like Cursor, Zed, Magic, Codeium, or the most recent one, Melty, traditional IDEs are starting to look redundant in a developer’s workflow. 

Or Has It?

There was a recent surge of developers uninstalling VS Code in favour of Cursor. But VS Code could easily ship AI-assisted coding features of its own, which could eventually mark the end of Cursor. Some people also predict that Microsoft might simply end up acquiring Cursor in the future. 

John Lindquist, creator of egghead.io, said that he recently chatted with Harald Kirschner, product manager at VS Code, about Cursor and VS Code, and that the team is keenly aware of Cursor’s capabilities, with several things possibly in the pipeline. “I think we’ll all be pleasantly surprised,” he said.

Other IDEs, such as JetBrains’ IntelliJ IDEA and PyCharm, and even lightweight editors like IDLE, are facing a similar crisis as AI-assisted coding gains traction. 

Regardless, modern generative AI coding tools can integrate several open-source LLMs, something a traditional IDE like VS Code does not do out of the box, making them handy for many AI developers. “You can select code and ask questions based on that piece of code. So you don’t have to keep switching between the IDE and browser,” explained a developer on X.

But this simply means that most IDEs in the future will come with generative AI integration. It will become the default for IDEs, just as integrating no-code and low-code capabilities once did. 

Moreover, though it is becoming easier for non-developers to build apps, building high-end software is still far beyond these auto-coding platforms. The tools can enable non-developers, or those with minimal coding experience, to create apps without needing to interact with an IDE, but that convenience does not extend to every kind of project. 

For experienced developers, AI tools can speed up prototyping by generating code samples from prompts to quickly test ideas. But in the long run, for complex, customised, and large-scale projects, traditional IDEs are still better at debugging and handling crucial features.

The Future of IDEs is AI-Assisted

Before talking about the end of IDEs, it is essential to understand what they stand for. With IDEs, developers have a standardised environment with a defined codebase for meeting specific requirements. This consistency cannot be fully guaranteed by an automated code generation platform like Cursor. 

On the contrary, future IDEs could take this further by generating larger chunks of code, or even entire modules, based on brief descriptions or prompts. The future of software development is likely to see AI-integrated IDEs becoming the norm. This would be helpful in analysing code in real-time and automatically correcting them. 

Furthermore, with generative AI integrated into IDEs, these code editors would be able to suggest context-aware code, which is not just based on the syntax, and even optimise it. This would also enable a personalised developer style of coding, enabling natural language prompting with coding, much like Cursor and Claude. 

AI-integrated IDEs will likely combine traditional development tools with generative coding, making software development more efficient, intuitive, and accessible while still providing the depth needed for complex projects. This would enable developers who know coding to take generative tools to the next level. 

But at the same time, it would be difficult to manage generative AI code inside IDEs, as people would still be hesitant to trust generative AI even for boilerplate code. It could also erode the coding skills of experienced developers. 

HARMAN Introduces ForecastGPT, a GenAI Platform for Enterprises
https://analyticsindiamag.com/ai-news-updates/harman-introduces-forecastgpt-a-genai-platform-for-enterprises/ | Tue, 03 Sep 2024

Designed specifically for uncertain and volatile markets, ForecastGPT leverages AI to help businesses make accurate predictions and informed decisions.


HARMAN, a Connecticut-based Samsung subsidiary specialising in audio electronics, has launched HARMAN ForecastGPT, a predictive analytics platform that helps organisations become more efficient through forecasting and optimal resource allocation. It is aimed at understanding complex data patterns, and its chief features include advanced AI capabilities, real-time adaptability and the ability to integrate seamlessly with various platforms. It is compatible with any data format and source, including CSV, Excel, SQL, and APIs.

HARMAN ForecastGPT’s Versatility 

It can benefit businesses in a plethora of ways, including sales forecasting, supply chain forecasting, financial planning, and even marketing. 

“Embracing AI is imperative for business success and at HARMAN DTS, we are pioneering the application of AI to deliver tangible, bottom-line results. By understanding the unique challenges and aspirations of each client, we’re crafting AI solutions that go beyond generic predictions. The ForecastGPT platform is a testament to our commitment to equipping businesses with the tools to move past challenging roadblocks and fully capitalise on the potential of AI,” said Nick Parrotta, President – Digital Transformation Solutions & Chief Digital and Information officer at HARMAN.  

HARMAN & its Growing Capabilities 

HARMAN has a wide portfolio including audio and video systems, car audio, connected car solutions, professional audio and lighting equipment, and more. Additionally, it is also the parent company to brands like JBL, Harman Kardon, AKG, Mark Levinson, and Infinity Systems. 

“HARMAN’s data science team has made significant contributions by incorporating machine learning and deep learning models into a range of applications, such as predictive analytics, computer vision, NLP, and graph analytics,” said Dr Jai Ganesh, chief product officer of HARMAN, in an earlier exclusive interview with AIM.

HARMAN’s data science team employs both open-source and commercial tools, such as Python, TensorFlow, PyTorch, AWS, Azure, Google Cloud Platform, Java, C++, Git, Jenkins, Docker, Kubernetes, R, Jupyter, SAS, MongoDB, Spark, Kafka, MySQL, RStudio, KNIME, RapidMiner, and H2O.

Channel-Specific and Product-Centric GenAI Implementation in Enterprises Leads to Data Silos and Inefficiencies
https://analyticsindiamag.com/ai-origins-evolution/channel-specific-and-product-centric-genai-implementation-in-enterprises-leads-to-data-silos-and-inefficiencies/ | Tue, 03 Sep 2024

Pega employs ‘situational layer cake’, which, as a part of its exclusive centre-out architecture, helps adapt microjourneys for different customer types, lines of business, geographies, and more.


Organisations often struggle with data silos and inefficiencies when implementing generative AI solutions. This affects over 70% of enterprises today, but global software company Pegasystems, aka Pega, seems to have cracked the code by using its patented ‘situational layer cake’ architecture. 

This approach democratises the use of generative AI across its platform, allowing clients to seamlessly integrate AI into their processes. They can choose from any LLM service provider, including OpenAI, Google’s Vertex AI, and Azure OpenAI Services, thereby ensuring consistent and efficient AI deployment across all business units.

“Our GenAI implementation at the rule type levels allows us to democratise the use of LLMs across the platform for any use case and by mere configuration, our clients can use any LLM service provider of their choice,” said Deepak Visweswaraiah, vice president, platform engineering and site managing director at Pegasystems, in an interaction with AIM.

Pega vs the World 

Recently, Salesforce announced the launch of two new generative AI agents, Einstein SDR Agent and Einstein Sales Coach Agent, which autonomously engage leads and provide personalised coaching. This move aligns with Salesforce’s strategy to integrate AI into its Einstein 1 Agentforce Platform, enabling companies like Accenture to scale deal management and focus on complex sales.

Salesforce integrates AI across all key offerings through its unified Einstein 1 Platform, which enhances data privacy, security, and operational efficiency via the Einstein Trust Layer. 

“We have generative AI capabilities in sales cloud, service cloud, marketing cloud, commerce cloud, as well as our data cloud product, making it a comprehensive solution for enterprise needs,” said Sridhar H, senior director of solution engineering at Salesforce.

SAP’s generative AI strategy, on the other hand, centres around integrating AI into core business processes through strategic partnerships, ethical AI principles, and enhancing its Business Technology Platform (BTP) to drive relevance, reliability, and responsible AI use across industries.

“We are adding a generative AI layer to our Business Technology Platform to address data protection concerns and enhance data security,” stated Sindhu Gangadharan, senior VP and MD of SAP Labs, underscoring the company’s focus on integrating AI with a strong emphasis on security and business process improvement.

Oracle, meanwhile, focuses on leveraging its second-generation cloud infrastructure, Oracle Cloud Infrastructure (OCI). It is designed with a unique, non-blocking network architecture to support AI workloads with enhanced data privacy while extending its data capabilities across multiple cloud providers.

“We’re helping customers do training inference and RAG in isolation and privacy so that you can now bring corporate sensitive, private data…without impacting any privacy issue,” said Christopher G Chelliah, senior vice president, technology & customer strategy, JAPAC at Oracle.

Meanwhile, IBM has watsonx.ai, an AI and data platform designed to help companies integrate, train, and deploy AI models across various business applications.

IBM’s generative AI strategy with watsonx.ai differentiates itself by offering extensive model flexibility, including IBM-developed (Granite), open-source (Llama 3 and alike), and third-party models, along with robust client protection and hybrid multi-cloud deployment options. At the same time, Pega focuses on deeply integrating AI within its platform to streamline business processes and eliminate data silos through its unique situational layer cake architecture.

Pega told AIM that it distinguishes itself from its competitors by avoiding the limitations of the traditional technological approaches, which often lead to redundant implementations and data silos. “In contrast, competitors might also focus more on channel-specific designs or product-centric implementations, which can lead to inefficiencies and fragmented data views across systems,” said Visweswaraiah. 

Situational Layer Cake Architecture 

Pega told AIM that its approach to integrating GenAI processes into business operations is distinct due to its focus on augmenting business logic and decision engines rather than generating code for development. 

It employs the situational layer cake architecture, which, as part of Pega’s exclusive centre-out architecture, helps adapt microjourneys for different customer types, lines of business, geographies, and more. 

“Our patented situational layer cake architecture works in layers making specialising a cinch, differentiating doable, and applying robust applications to any situation at any time, at any scale,” said  Visweswaraiah.

He added that enterprises can start with small, quick projects that can grow and expand over time, ensuring they are adaptable and ready for future challenges.

In addition to this, the team said it has the ‘Pega Infinity’ platform, which can mirror any organisation’s business by capturing the critical business dimensions within its patented situational layer cake. 

“Everything we build in the Pega platform, processes, rules, data models, and UI is organised into layers within the situational layer cake. This means that you can roll out new products, regions, or channels without copying or rewriting your application,” shared  Visweswaraiah. 

He further said that the situational layer cake lets you declare what is different and only what is different into layers that match each dimension of your business. 

Simply put, when a user executes the application, the Pega platform slices through the situational layer cake and automatically assembles an experience that is tailored exactly to that user’s context. 
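As an outside illustration of that idea (not Pega's code, and purely an assumption for explanatory purposes), the resolution logic can be pictured as a stack of increasingly specific layers, where a rule defined in a more specific layer overrides the same rule in the base layer at runtime:

```python
# Illustrative sketch of layered rule resolution, loosely inspired by the
# "situational layer cake" concept; this is not Pega's implementation.

BASE_LAYER = {"credit_limit": 5000, "greeting": "Welcome", "kyc_check": "standard"}

# Each layer declares only what is different for its dimension of the business.
LAYERS = {
    ("geography", "IN"): {"kyc_check": "aadhaar"},
    ("line_of_business", "premium"): {"credit_limit": 50000},
    ("channel", "mobile"): {"greeting": "Welcome back"},
}

def resolve_rules(context: dict) -> dict:
    """Assemble the effective ruleset for a user's context by stacking layers."""
    effective = dict(BASE_LAYER)
    for dimension, value in context.items():
        overrides = LAYERS.get((dimension, value), {})
        effective.update(overrides)  # more specific layers win
    return effective

# A premium customer in India on the mobile channel gets a tailored experience
# without any rules being copied or rewritten.
print(resolve_rules({"geography": "IN", "line_of_business": "premium", "channel": "mobile"}))
```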

Visweswaraiah believes that this architecture has given them a great opportunity to integrate GenAI into the platform at the right layers so it is available across the platform. 

These Ex-Apple Employees from India are Building the Foundation Model for Robotics
https://analyticsindiamag.com/ai-breakthroughs/these-ex-apple-employees-from-india-are-building-the-foundation-model-for-robotics/ | Tue, 03 Sep 2024

Backed by Khosla Ventures, Lockheed Martin Ventures and others, the startup is developing safe, affordable, and intelligent robots.


The next wave of AI is likely to be Physical AI or Embodied AI. Today, a number of startups are focused on building this technology, including the California-based Vayu Robotics, which is strategically accelerating autonomy.

Safety, Cost, and Autonomy

“If autonomy is ever to make it into the world and gain trust and credibility among the people, then it has to start at a very safe spot. So the biggest knob that we have to dial up is the safety, [by] reducing the mass and speed of the robot,” said Mahesh Krishnamurthi, the co-founder and chief product officer of Vayu Robotics, in an exclusive interaction with AIM.  

“The kinetic energy of our robot is almost 1000% lower than the kinetic energy of a truck going on the road. So, by reducing the kinetic energy and the momentum of the robot, we inherently get very high safety,” he said. 

Addressing safety is one aspect, but another important focus of the startup is developing new sensors that are both low-cost and high-quality.

“So, we thought, maybe we could take a bet on this sensor and say that maybe in a few years, we would be able to build a sensor that is super low-cost, but very high in quality. That would bridge the gap between a very expensive LiDAR system, which is very high quality, and an inexpensive camera system that is not as safe as a LiDAR. 

“We tried to solve this problem and came up with the technological breakthrough,” said Krishnamurthi. 

Vayu Robotics is focusing on developing an intelligent drive agent that requires less capital to build by using simulated data for training, unlike Tesla’s approach, which relies on real-world data. 

Krishnamurthi believes that in recent years, simulators have advanced to the point where their output is nearly indistinguishable from reality, allowing the team to create driving behaviours in simulation and transfer them seamlessly to the real world.

“So that’s another technology breakthrough that has enabled us to build a product like a delivery robot driving like a bicyclist for less than $5,000,” he added. 

The startup has developed six specialised simulators, each tailored to specific environments. These include one for indoor environments, one for outdoor settings, another for bike lanes, and yet another for the highways. 

Additionally, they have a dedicated simulator purely for validation purposes, which is kept separately from the training process. 

Vayu and Strength

Inspired by the Sanskrit word ‘vayu’, which the founders interpret as the intelligence behind all motion and energy, Vayu Robotics is building technology that lets autonomous mobile robots move through the world. The founding team also brings decades of experience in autonomous systems. 

Krishnamurthi comes with a deep-tech background, beginning at Intel Labs in 2008 and moving on to Apple, where he worked on autonomous systems. Specialising in optoelectronics and LiDAR technology, he later joined Lyft. 

He eventually co-founded Vayu Robotics in 2022 with his friends Anand Gopalan, who was the CEO of Velodyne Lidar, and Nitish Srivastava, a former colleague at Apple. 

“Since we have collectively worked on this problem for about 25 years, we kind of had an insight on what works, what doesn’t, and what just might,” said Krishnamurthi. 

The startup closed its seed financing round in October last year with backing from some of the biggest VC players, including Khosla Ventures, Lockheed Martin Ventures, ReMY Investors and others. 

“Khosla himself is one of the big backers of this vision. He loves the idea and he is the best backer we could have as a startup. A true visionary of what the world needs to look like. We are fortunate to have him,” said Krishnamurthi. 

Over the next few years, the startup is focusing on commercialising and scaling their technology, with a contract to deploy up to 2,500 robots across multiple US cities in phases. It will be a scaled ramp-up, with set milestones. 

The startup will also focus on building sensors as a product on its own. “We will also try to have some ancillary applications for just the sensor,” he said. 

While a number of robotics startups are emerging in this space, Krishnamurthi doesn’t perceive them as competition, but rather as a combined effort to solve a common problem. 

“I do feel like it’s a good time for the industry and people in general because more people working on a problem increases the probability of it getting solved,” he said. 

Without revealing the specifics, Krishnamurthi mentioned that their primary customer is a multi-billion-dollar e-commerce company, and another major fan company has expressed interest in their sensing product, making them another key customer.

“When Will Mira Get Married?” OpenAI CTO’s Mother Asked ChatGPT
https://analyticsindiamag.com/ai-news-updates/when-will-mira-get-married-openai-ctos-mother-asked-chatgpt/ | Tue, 03 Sep 2024

Paytm founder Vijay Shekhar Sharma took to X and said, “#MomsEverywhere.”


OpenAI’s Mira Murati shared that the first time her mother used ChatGPT, she asked, “When will Mira get married?” 

In a recent interview at Cannes Lions 2024, Murati recalled that when she first introduced her mother to ChatGPT in 2022, she was with her sister in Italy. She explained to her mother that she could ask any question she wanted, even in Albanian. Her mother’s first question to ChatGPT was about Mira’s marriage. To this, Mira’s sister humorously responded, “Mom, it’s not magic, it’s artificial intelligence”.

“She was having such a natural interaction with it that she thought she could ask anything,” said Murati. 

Paytm founder Vijay Shekhar Sharma took to X and said, “#MomsEverywhere.”

“didn’t know she was Indian,” quipped a user on X. 

Another user humorously remarked, “The most Albanian mother interaction ever.” Recalling OpenAI’s leadership transition, when Murati was appointed interim CEO during the Sam Altman kerfuffle, the user shared, “I told my mom, ‘Look, an Albanian woman is the CEO of one of the most innovative companies in the world.’”

His mother’s response? “Message her, ask her on a date, and for a job.”

Since joining OpenAI in 2018, Murati has been instrumental in leading the development of ChatGPT, DALL-E, and the GPT series. Her leadership has been recognised by industry leaders, including Microsoft CEO Satya Nadella, for her ability to blend technical expertise with a deep appreciation for OpenAI’s mission of responsible AI development.

Murati briefly served as interim CEO of OpenAI in November 2023, following the unexpected removal of Sam Altman. Her tenure as interim CEO was short-lived, as Emmett Shear replaced her three days later, and Altman was reinstated five days after his removal. 

Meet Melty, Open Source Alternative to Cursor
https://analyticsindiamag.com/ai-news-updates/meet-melty-open-source-alternative-to-cursor/ | Tue, 03 Sep 2024

Started by Charlie Holtz and Jackson de Campos, Melty is backed by Y Combinator and is part of the S24 batch.


With AI code editors like Cursor, Zed, and Codeium gathering all the attention, an open-source alternative has recently launched in the market. Meet Melty, an open-source AI code editor specifically designed for 10x engineers.


Started by Charlie Holtz and Jackson de Campos, Melty is backed by Y Combinator and is part of the S24 batch. The founders tout it as the first editor that understands what developers are coding, from the terminal to GitHub, and collaborates with them to write production-ready code.


In a thread on X, Holtz said that during the first weeks of YC they were still figuring out a bunch of ideas when they started working on AI developer tools as suggested by Aaron Epstein. “We’re big fans of the tools that are out there. But we still find ourselves copy-pasting from Claude, juggling ten chats for the same task, and committing buggy code that comes back to bite us later. Nothing is quite ‘it’ yet,” said Holtz.

After working on Melty for 28 days, Holtz said that Melty is already writing about half of its own code.

“Charlie and Jackson build so fast you’d think they’re a team of 10 stacked engineers, rather than just 2 co-founders. Melty’s going to give every engineer their superpowers,” said Epstein. 

The team explains that the goal of building Melty is to help people understand their code: it watches every change like a pair programmer and learns to adapt to the developer it is working with. 

Microsoft Appoints Former AWS Head Vaishali Kasture as GM for India and South Asia
https://analyticsindiamag.com/ai-news-updates/microsoft-appoints-former-aws-head-vaishali-kasture-as-gm-for-india-and-south-asia/ | Mon, 02 Sep 2024

With over 25 years of leadership experience, Kasture will oversee operations at a time when technology adoption is accelerating across both corporate and SMB sectors. 


Microsoft has announced the appointment of Vaishali Kasture as the General Manager for its Small, Medium, and Corporate (SMC) business in India and South Asia. With over 25 years of leadership experience, Kasture will oversee operations at a time when technology adoption is accelerating across both corporate and SMB sectors. 

Kasture previously served as the Head of AWS India and South Asia. She will be the second high-ranking executive to leave AWS for Microsoft within the past year, as the companies vie for a larger share of the cloud market in India. Her predecessor at AWS, Puneet Chandok, became the head of Microsoft India and South Asia in September last year.

“I cannot think of a better time to take up the role as GM for the SMC business in India and South Asia,” said Kasture on her appointment to the new role. She highlighted Microsoft’s comprehensive approach to technology, focusing on areas such as cloud computing, artificial intelligence, data, and security.

“Microsoft is singularly positioned as a full-service technology company, focusing on an end-to-end technology stack,” Kasture added, citing the company’s ability to enhance productivity through generative AI.

Kasture also referenced Microsoft CEO Satya Nadella’s vision, quoting him: “Microsoft is like the United Nations of Software. Helping people and organizations to do more, to achieve more.”

India, described by Kasture as “the fastest-growing large economy in the world,” presents significant opportunities for Microsoft, which has bolstered its presence in the country over the past three decades. 

Kasture will collaborate with key figures including Chandok and Rachel Bondi to advance Microsoft’s vision in the region. She expressed gratitude towards Ahmed Mazhari and Kevin Peesker for their support and partnership during her transition into the role.

The appointment underscores Microsoft’s commitment to expanding its footprint in India and South Asia, aligning with the growing demand for technology-driven solutions in the region.

Anthropic Claude Artifacts to Kill App Store Soon
https://analyticsindiamag.com/ai-origins-evolution/anthropic-claude-artifacts-to-kill-app-store-soon/ | Mon, 02 Sep 2024

While OpenAI's Plugins Store was billed as an ‘iOS App Store moment’, it failed to meet expectations and ended up being a hot mess. 


Anthropic recently made Claude Artifacts available to all users on iOS and Android, allowing anyone to easily create apps without writing a single line of code. AIM tried its hand at it and successfully created a Cricket Quiz game, Temple Run, and Flappy Bird, all with a single line of prompt in English. 

Debarghya (Deedy) Das, principal at Menlo Ventures, used Artifacts to build a Splitwise-like app. “With Claude launching on iOS today, I can now generate the Splitwise app instead of paying for Pro,” he said.

“Claude Artifacts allows you to go from English to an entire app and share it!” he added, saying that his friend, a product manager who couldn’t code, now creates apps in minutes. “The cost of a lot of software is nearing ~$0.”

This brings us to the question of whether this could be the end of app stores. Groq’s Sunny Madra thinks this is the beginning of “The Build Your Own (BYO) era”. Since Artifacts are shareable, anyone can use the apps you build, and they can be shared on any social media platform as a link.

Several users experimented with Claude Artifacts by building different apps. 

“Claude 3.5’s artifacts, now shareable, can help teach. In class, startup financing can be hard to explain. Now I just asked, “Create an interactive simulation that visually explains payoff differences for a startup and VC with liquidation preference…” Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, wrote on X.

Similarly, Allie K Miller, AI advisor and angel investor, used it to build a calendar and an AI quiz, which took less than two minutes! 

The best part about Artifacts is that it is mobile-friendly and responsive. “Using Claude 3.5 Sonnet, you can generate artifacts (e.g., code snippets, text documents, or website designs) and iterate on them right within the same window,” exclaimed Elvis Saravia, the co-founder of DAIR.AI. 

On-Demand Software

When using mobile phones, we often search for apps that can solve our specific needs. For example, if you’re into fitness, you might download an app that offers various workouts. However, the app may not provide the customisation you seek. Now, instead of relying on downloads, you can create your own personalised apps that cater specifically to your needs. 

“On demand software is here,” said Joshua Kelly, chief technology officer, Flexpa, a healthcare tool company. Using Artifacts, he built a simple stretching time app for his runs in just 60 seconds.

Beyond just giving prompts, users can also show Claude an existing website or app, and it can generate a close replica. 

“You can now take a photo of something you want to replicate, give it to AI, and it outputs the code with a preview right on your iPhone,” posted Linas Beliūnas, director of revenue at Zero Hash, on LinkedIn.

On the internet, one can find several apps built using Claude Artifacts, such as the Rubik’s Cube Simulator, Self-Playing Snake Game, Reddit Thread Analyzer, Drum Pad, and Daily Calorie Expenditure.

Apart from building apps, Artifacts has the potential to greatly impact education. “Any piece of content— whether it’s a screenshot, PDF, presentation, or something else—can now be turned into an interactive learning game,” said AI influencer Rowan Cheung.

The End of No-Code Platforms?

Claude Artifacts is going to be a big threat to no-code and low-code app builder platforms such as AppMySite, Builder.ai, Flutter, and React Native. 

“Claude Artifacts are insane — I cannot believe how good the product is. You can ask it to build most little internal tools in minutes (at least, the UI) and customize further via code. Feels like a superpower for semi-technical people,” posted a user on X. 

Moreover, Claude, when put together with Cursor AI, has simplified the process of making apps. “So I’m building this box office app in React Native and I thought I’d try Cursor with Claude 3.5 and see how far I’d get. The backend is django/psql that’s already in place,” said another user on X. “Starting from scratch, I have authenticated with my server to log in users, issue tickets, register tickets, scan ticket QR codes, and send email/sms confirmations,” he added. 

Claude is set to rapidly democratise app development, potentially eliminating the need for an App Store. It will enable anyone to build apps based on their specific needs, complete with personalised UI and UX.

Moreover, building an app for the iOS App Store is challenging. Apple charges a standard 30% commission on app sales and in-app purchases, including both paid app downloads and digital goods sold within the apps. 

The company enforces rigorous guidelines that apps must adhere to, covering aspects such as user interface design, functionality, and privacy. Many apps are rejected for minor violations, and these guidelines are frequently updated, requiring developers to stay informed and adapt quickly.

However, for now, Claude lets anyone build and experiment freely to see whether something works. And if someone wants to publish an app built using Claude on the iOS App Store, that is definitely an option.

Interestingly, Apple recently announced that, for the first time, it will allow third-party app stores on iOS devices in the EU. This change enables users to download apps from sources other than Apple’s official App Store, providing more options for app distribution and potentially reducing costs for developers.

Better than ChatGPT 

OpenAI previously introduced ChatGPT plugins, enabling users to create custom GPTs for their specific tasks. However, these plugins do not compare to Artifacts, which allows users to visualise their creations. 

While the Plugins Store was billed as an ‘iOS App Store moment’, it failed to meet the expectations and ended up being a hot mess. 

Moreover, during DevDay 2023, OpenAI chief Sam Altman launched a revenue-sharing programme which was introduced to compensate the creators of custom GPTs based on user engagement with their models. 

However, many details about the revenue-sharing mechanism remain unclear, including the specific criteria for payments and how the engagement would be measured.

“It was supposed to be announced sometime in Q1 2024, but now it’s the end of March, and there are still few details about it,” posted a user on the OpenAI Developer Forum in March. There have been no updates on the matter from OpenAI since then.

The post Anthropic Claude Artifacts to Kill App Store Soon  appeared first on AIM.

]]>
Revrag Unveils Its First AI Agent Emma https://analyticsindiamag.com/ai-news-updates/revrag-unveils-its-first-ai-agent-emma/ Mon, 02 Sep 2024 11:09:10 +0000 https://analyticsindiamag.com/?p=10134251

Emma integrates with popular tools like Slack and HubSpot

The post Revrag Unveils Its First AI Agent Emma appeared first on AIM.

]]>

Revrag, a startup developing AI agents, has announced the launch of its first AI-powered solution, Emma.

Emma is designed to revolutionise the initial stages of customer outreach and lead generation. Seamlessly integrating with popular tools like Slack and HubSpot, Emma streamlines the sales process, making it faster and more efficient for teams. With Emma, sales teams can easily:

  • Set up campaigns to target potential customers.
  • Customise outreach efforts based on specific business needs.
  • Save valuable time by automating the initial contact stages.
  • Potentially reach a larger pool of leads in less time.

“The SaaS world is evolving rapidly, and we see enormous potential for Generative AI in the sales industry, a sentiment echoed by recent market trends. AI agents like Emma represent a transformative technology that’s reshaping business operations globally. We aim to fill existing gaps and lead the market towards greater innovation. For small businesses and startups, AI agents like Emma offer a unique opportunity to level the playing field,” Ashutosh Singh, CEO of Revrag, said.

Revrag’s AI agents are sophisticated systems designed to function autonomously, learn from their environment, and achieve specific goals with minimal human intervention.

Revrag’s journey has been bolstered by an impressive $600k in pre-seed funding, reflecting investor confidence in the company’s vision. As the industry eagerly anticipates the launch of Emma, Revrag is set to challenge traditional sales processes and potentially usher in a new era in enterprise sales.

Backed by prominent angel investors and venture backers, Revrag is poised to make a significant impact in the competitive SaaS landscape.

The Bengaluru-based startup has attracted the attention of industry heavyweights, including Viral Bajaria, Co-founder of 6sense, Kunal Shah, Founder of Cred, Deepak Anchala, Founder of Slintel, Vetri Vellore, Founder of Rhythms, and 20 other renowned investors.

The post Revrag Unveils Its First AI Agent Emma appeared first on AIM.

]]>
How Joule is Helping SAP Find Moat in Spend Management https://analyticsindiamag.com/intellectual-ai-discussions/how-joule-is-helping-sap-find-moat-in-spend-management/ Mon, 02 Sep 2024 10:05:48 +0000 https://analyticsindiamag.com/?p=10134239

SAP customers are now reaping the benefits of Ariba, Concur and Fieldglass – companies which SAP has acquired over the years – under one platform.

The post How Joule is Helping SAP Find Moat in Spend Management appeared first on AIM.

]]>

In today’s fast-paced world, it is vital for organisations to optimise procurement, manage supply chains and control costs effectively. For instance, cocoa prices have surged over 400% this year, posing a significant challenge for chocolate manufacturers.

Spend management tools provide comprehensive visibility into a company’s spending patterns, streamlining procurement processes. Generative AI could further optimise spend management by providing predictive analytics, supplier recommendations and enhanced negotiation strategies. 

This is exactly what SAP has done. The company has consolidated all aspects of spend management under one umbrella, calling it Intelligent Spend and Business Network (ISBN). And now, it has thrown generative AI into the mix.

SAP customers are now reaping the benefits of Ariba, Concur and Fieldglass – companies which SAP has acquired over the years – under one platform coupled with SAP Business Network.

While spend management is not a new concept, SAP believes the solution holds fresh potential now that it is powered by generative AI and Joule, a proprietary AI assistant developed internally by the company.

How GenAI is Changing Spend Management

“At SAP, our AI-first product strategy involves deeply embedding AI into core business processes. Rather than overlaying AI onto existing workflows, we are fundamentally reimagining processes, positioning AI as a constant, collaborative partner for every user,” Jeff Collier, chief revenue officer, SAP Intelligent Spend & Business Network, told AIM.

By using Ariba, a procurement tool, customers can now leverage generative AI models to accelerate their planning processes across multiple categories (including third-party market data from Beroe) and reduce supplier onboarding time, Collier further revealed.

(Jeff Collier, chief revenue officer, SAP Intelligent Spend & Business Network)

Generative AI models are also helping Ariba customers make sense of historical data, derive insights from it and receive recommendations in real time, which in turn helps them in their procurement process. 

“To optimise external workforce and services spend for greater insight, control, and savings, SAP Fieldglass customers are now leveraging AI-enhanced SOW description generation, AI-enhanced job descriptions, and AI-enhanced translation of job descriptions,” Collier added.

AI Agents are Coming

If not LLMs, AI agents are expected to truly scale AI. When asked if these could be part of SAP’s ISBN solution, Collier said that SAP is focused on embedding digital assistants directly into its products through Joule, SAP’s generative AI copilot. 

Joule provides AI-assisted insights and will be integrated as a standard feature across the SAP portfolio, including ISBN.

“In my conversations with chief executives, business leaders, and decision-makers around the world, a common theme has emerged– the urgent need to do more with less,” he said. 

He added that the COVID-19 pandemic has significantly expanded responsibilities, yet headcounts have remained flat or declined, making productivity a critical driver for AI interest among business leaders.

“The real question now is how swiftly these leaders can embed AI into their operations to start reaping its benefits. With the proliferation of tools, complex policies, and the need to maximise return and value, organisations are seeking chat-based interfaces or agents to guide them through their tasks and answer queries efficiently,” he added.

Hence, it makes sense for SAP to integrate AI agents into its entire portfolio. With Joule, SAP users can save time and increase productivity by describing their ideas, asking analytical questions, or instructing the system, rather than navigating through traditional clicks or coding. 

ISBN, an Interesting Prospect for India 

Explaining further what ISBN solutions from SAP truly mean, Ashwani Narang, Vice President, Intelligent Spend and Business Network, SAP Indian Subcontinent, told AIM that spend management extends beyond traditional procurement to include the entire supply chain, contingent labour, and employee expenses. 

“Various departments—marketing, finance, and others—constantly request funds, highlighting the need for comprehensive oversight. Procurement used to be the sole area focused on savings, but now spend management encompasses all financial outflows, including those outside the organisation like partners, logistics providers, and consultants,” Narang said.

(Ashwani Narang, Vice President, Intelligent Spend and Business Network, SAP Indian Subcontinent)

Narang believes that as the country transitions towards being a manufacturing economy, with a growing consensus on producing things locally thanks to initiatives like ‘Make in India’, ISBN could be a great tool for Indian companies. 

“The more you become a manufacturing economy, the more working capital becomes important and I believe that’s where the onus is going to be,” he said.

SAP has witnessed triple-digit growth with ISBN, and according to Narang, thousands of customers in India are already leveraging the solutions.

The solutions can also help companies with category management. “For instance, if a mattress manufacturer needs cotton, it’s crucial to assess whether it’s a supplier-power or buyer-power category. Knowing if enough suppliers offer the specific grade of cotton helps in negotiating prices,” he added.  

Joule, Narang added, can provide insights into market conditions and supplier dynamics, guiding strategic sourcing and ensuring better negotiations.

Moreover, ISBN is also not limited to large enterprises with a global footprint. The portfolio of SAP customers for ISBN includes mid-size and small enterprises as well.

“Any company with significant purchasing needs—whether it’s a relatively small firm with a turnover of INR 1,000 crore or a large corporation like Microsoft with a $50 billion valuation—can benefit from SAP’s intelligence solutions.”

The post How Joule is Helping SAP Find Moat in Spend Management appeared first on AIM.

]]>
Wake Me Up When Companies Start Hiring Clueless Modern ‘Developers’ https://analyticsindiamag.com/ai-origins-evolution/wake-me-up-when-companies-start-hiring-clueless-modern-developers/ Mon, 02 Sep 2024 09:00:42 +0000 https://analyticsindiamag.com/?p=10134230

People who know how to drive are not all F1 racers.

The post Wake Me Up When Companies Start Hiring Clueless Modern ‘Developers’ appeared first on AIM.

]]>

“Programming is no longer hard” or “everyone’s a developer” are the most common phrases that one would hear on LinkedIn or X as everyone is basically talking about Cursor, Claude, or GitHub Copilot. But the problem is that most of the people who claim so are not developers themselves. They are merely ‘modern developers.’

Santiago Valdarrama, founder of Tideily and an ML teacher who has been actively asking developers whether they use Cursor, started another discussion, arguing that tools such as Cursor can only assist existing developers in writing better code. “Wake me up when companies start hiring these clueless modern ‘developers’,” he added.

He gave an analogy of calling yourself an F1 racer after playing a racing game on an iPad.

In all honesty, it is undeniable that the barrier to entry for becoming a developer has dropped significantly ever since Cursor, and even ChatGPT, arrived. People have been able to build software for personal use and even build apps in mere hours. But this does not change the fact that, for now, such tools are limited to creating small apps and simple software.

“You Can’t Correct Code if You Don’t Know How to Code”

Given all this hype around the end of software engineering roles, developers and programmers are getting worried about the future of their jobs. It is indeed true that software engineers have to upskill faster than anyone else, but the fear of being replaced can be pushed off for at least a few years.

Tools such as Cursor and Claude are only useful if a developer actually knows how the code works. The real game-changer is how developers who use AI will outpace those who don’t. “The right tools can turn a good developer into a great one. It’s not about replacing talent; it’s about enhancing it,” said Eswar Bhageerath, SWE at Microsoft. 

AI only takes care of the easy part for a developer – writing code. The real skills a developer brings are reasoning and problem-solving, along with fixing the bugs in the code itself, and these cannot be replaced by any AI tool, at least not anytime soon. Cursor can speed up the process and write the code, but correcting that code is something only developers can do.

Moreover, bugs introduced into code by AI tools are not easily traceable by developers without yet another AI bug-detection tool. Andrej Karpathy, who has been actively supporting Cursor AI over GitHub Copilot, shared a similar observation while working with it: “it’s slightly too convenient to just have it do things and move on when it seems to work.” He admitted this has introduced a few bugs when he is coding too fast and tapping through big chunks of code.

These bugs cannot be fixed by modern ‘developers’ who were famously also called ‘prompt engineers’. To put it simply, someone has to code the code for no-code software.

Speaking of prompt engineers, the future will include many AI agents that can write code themselves. The job of software engineers will then be managing a team of these AI coding agents, which is not possible for developers who entered the field only by learning to build apps with Cursor or Claude. It is also possible that team sizes will shrink, as there will be less need for entry-level developers.

Upskilling is the Need of the Hour

That is why existing developers should focus on developing engineering skills, and not just coding skills. Eric Gregori, adjunct professor at Southern New Hampshire University, said that this is why he has been teaching his students to focus more on engineering than just programming. “AI is too powerful of a tool to ignore,” he said, while adding that existing limitations of coding platforms have been removed completely. 

“Hopefully, AI will allow software engineers to spend more time engineering and less time programming.” It is time to bring back the old way of learning how to code, as modern developers are tempted to simply copy and paste code from AI tools rather than do the real thinking. 

The F1 driver analogy fits perfectly here. Most people can learn how to drive, but very few will ever become race drivers. The same is the case with coding tools. But if all people need is prototyping and an initial version of the code, AI-driven developers can do a decent enough job.

That is why many pioneers of the AI field, such as Karpathy, Yann LeCun, Francois Chollet and even Sam Altman, say that there will still be 10 million coding jobs in the future requiring skills in Python, C++ and the like, even as everyone becomes a ‘modern developer’ in some way and most of the coding gets done by AI agents. 

It is possible that much of the coding in the future will be done in English, but most of the work will involve debugging and managing the code generated by AI, which is not possible for someone who has not learned to code from scratch.

The post Wake Me Up When Companies Start Hiring Clueless Modern ‘Developers’ appeared first on AIM.

]]>
The Operationalisation of GenAI https://analyticsindiamag.com/ai-highlights/the-operationalisation-of-genai/ Mon, 02 Sep 2024 08:28:25 +0000 https://analyticsindiamag.com/?p=10134232

Organisations are now pledging substantial investments toward GenAI, indicating a shift from conservative avoidance to careful consideration.

The post The Operationalisation of GenAI appeared first on AIM.

]]>

The operationalisation of GenAI is becoming significant across various industries. Vinoj Radhakrishnan and Smriti Sharma, Principal Consultants, Financial Services at Fractal, shared insights into this transformative journey, shedding light on how GenAI is being integrated into organisational frameworks, particularly in the banking sector, addressing scepticism and challenges along the way.

“GenAI has undergone an expedited evolution in the last 2 years. Organisations are moving towards reaping the benefits that GenAI can bring to their ecosystem, including in the banking sector. Initial scepticism surrounding confidentiality and privacy has diminished with more available information on these aspects,” said Sharma. 

She noted that many organisations are now pledging substantial investments toward GenAI, indicating a shift from conservative avoidance to careful consideration.

Radhakrishnan added, “Organisations are now more open to exploring various use cases within their internal structures, especially those in the internal operations space that are not customer facing. 

“This internal focus allows for exploring GenAI’s potential without the regulatory scrutiny and reputational risk that customer facing applications might invite. Key areas like conversational BI, knowledge management, and KYC ops are seeing substantial investment and interest.”

Challenges in Operationalisation

Operationalising GenAI involves scaling applications, which introduces complexities. “When we talk about scaling, it’s not about two or three POC use cases; it’s about numerous use cases to be set up at scale with all data pipelines in place,” Sharma explained. 

“Ensuring performance, accuracy, and reliability at scale remains a challenge. Organisations are still figuring out the best frameworks to implement these solutions effectively,” she said.

Radhakrishnan emphasised the importance of backend development, data ingestion processes, and user feedback mechanisms. 

“Operationalising GenAI at scale requires robust backend to frontend API links and contextualised responses. Moreover, adoption rates play a crucial role. If only a fraction of employees uses the new system and provide feedback, the initiative can be deemed a failure,” he said.

The Shift in Industry Perspective

The industry has seen a paradigm shift from questioning the need for GenAI to actively showing intent for agile implementations. “However,” Sharma pointed out, “only a small percentage of organisations, especially in banking, have a proper framework to measure the impact of GenAI. Defining KPIs and assessing the success of GenAI implementations remain critical yet challenging tasks.”

The landscape is evolving rapidly. From data storage to LLM updates, continuous improvements are necessary. Traditional models had a certain refresh frequency, but GenAI requires a more dynamic approach due to the ever-changing environment.

Addressing employee adoption, Radhakrishnan stated, “The fear that AI will take away jobs is largely behind us. Most organisations view GenAI as an enabler rather than a replacement. The design and engineering principles we adopt should focus on seamless integration into employees’ workflows.”

Sharma illustrated this with an example: “We are encouraged to use tools like Microsoft Copilot, but the adoption depends on how seamlessly these tools integrate into our daily tasks. Employees who find them cumbersome are less likely to use them, regardless of their potential benefits.”

Data Privacy and Security

Data privacy and security are paramount in GenAI implementations, especially in sensitive sectors like banking. 

Radhakrishnan explained, “Most GenAI use cases in banks are not customer-facing, minimising the risk of exposing confidential data. However, there are stringent guardrails and updated algorithms for use cases involving sensitive information to ensure data protection.”

Radhakrishnan explained that cloud providers like Microsoft and AWS offer robust security measures. For on-premises implementations, organisations need to establish specific rules to compartmentalise data access. 

“Proprietary data also requires special handling, often involving masking or encryption before it leaves the organisation’s environment,” Sharma added.

Best Practices for Performance Monitoring

Maintaining the performance of GenAI solutions involves continuous integration and continuous deployment (CI/CD). 

“LLMOps frameworks are being developed to automate these processes,” Radhakrishnan noted. “Ensuring consistent performance and accuracy, especially in handling unstructured data, is crucial. Defining a ‘golden dataset’ for accuracy measurement, though complex, is essential.”

Sharma added that the framework for monitoring and measuring GenAI performance is still developing. Accuracy involves addressing hallucinations and ensuring data quality. Proper data management is fundamental to achieving reliable outputs.

CI/CD play a critical role in the operationalisation of GenAI solutions. “The CI/CD framework ensures that as underlying algorithms and data evolve, the models and frameworks are continuously improved and deployed,” Radhakrishnan explained. “This is vital for maintaining scalable and efficient applications.”

CI/CD frameworks help monitor performance and address anomalies promptly. As GenAI applications scale, these frameworks become increasingly important for maintaining accuracy and cost-efficiency.

Measuring ROI is Not So Easy

Measuring the ROI of GenAI implementations is complex. “ROI in GenAI is not immediately apparent,” Sharma stated. “It’s a long-term investment, similar to moving data to the cloud. The benefits, such as significant time savings and reduction in fines due to accurate information dissemination, manifest over time.”

Radhakrishnan said, “Assigning a monetary value to saved person-hours or reduced fines can provide a tangible measure of ROI. However, the true value lies in the enhanced efficiency and accuracy that GenAI brings to organisational processes.”

“We know the small wins—saving half a day here, improving efficiency there—but quantifying these benefits across organisations is challenging. At present, only a small portion of banks have even started the journey on a roadmap for that,” added Sharma.

Sharma explained that investment in GenAI is booming, but there is a paradox. “If you go to any quarterly earnings call, everybody will say we are investing X number of dollars in GenAI. Very good. But on the ground, everything is a POC (proof of concept), and everything seems successful at POC stage. The real challenge comes after that, when a successful POC needs to be deployed at production level. There are very few organisations scaling from POC to production as of now. One of the key reasons for that is uneasiness on the returns from such an exercise – taking us back to the point on ROI.”

“Operational scaling is critical,” Radhakrishnan noted. “Normally, when you do a POC, you have a good sample, and you test the solution’s value. But when it comes to operational scaling, many aspects come into play. It must be faster, more accurate, and cost-effective.” 

Deploying and scaling the solution shouldn’t involve enormous investments. The solution must be resilient, with the right infrastructure. When organisations move from POC to scalable solutions, they often face trade-offs in terms of speed, cost, and continuous maintenance.

The Human Element

Human judgement and GenAI must work in harmony. “There must be synergy between the human and what GenAI suggests. For example, in an investment scenario, despite accurate responses from GenAI, the human in the loop might disagree based on their gut feeling or client knowledge,” said Radhakrishnan.

This additional angle is valuable and needs to be incorporated into the GenAI algorithm’s context. A clash between human judgement and algorithmic suggestions can lead to breakdowns, especially in banking, where a single mistake can result in hefty fines.

Data accuracy is obviously crucial, especially for banks that rely heavily on on-premises solutions to secure customer data. 

“Data accuracy is paramount, and most banks are still on-premises to secure customer data. This creates resistance to moving to the cloud. However, open-source LLMs can be fine-tuned for on-premises use, although initial investments are higher,” added Sharma.

The trade-off is between accuracy and contextualisation. Fine-tuning open-source models is often better than relying solely on larger, generic models.

Radhakrishnan and Sharma both noted that the future of GenAI in banking is moving towards a multi-LLM setup and small language models. “We are moving towards a multi-LLM setup where no one wants to depend on a single LLM for cost-effectiveness and accuracy,” said Sharma. 

Another trend she predicted is the development of small language models specific to domains like banking, which handle nuances and jargon better than generalised models.

Moreover, increased regulatory scrutiny is on the horizon. “There’s going to be a lot more regulatory scrutiny, if not outright regulation, on GenAI,” predicted Radhakrishnan.

“Additionally, organisations currently implementing GenAI will soon need to start showing returns. There’s no clear KPI to measure GenAI’s impact yet, but this will become crucial,” he added.

“All the cool stuff, especially in AI, will only remain cool if the data is sound. The more GenAI gathers steam, the more data tends to lose attention,” said Sharma, adding that data is the foundation, and without fixing it, no benefits from GenAI can be realised. “Banks, with their fragmented data, need to consolidate and rein in this space to reap any benefits from GenAI,” she concluded.

The post The Operationalisation of GenAI appeared first on AIM.

]]>
10 Insane Videos by Luma’s Dream Machine 1.5  https://analyticsindiamag.com/ai-mysteries/10-insane-videos-by-lumas-dream-machine-1-5/ Mon, 02 Sep 2024 07:07:31 +0000 https://analyticsindiamag.com/?p=10134216

Dream Machine just got revamped! 

The post 10 Insane Videos by Luma’s Dream Machine 1.5  appeared first on AIM.

]]>

Luma recently released an upgraded version of its innovative video generator, Dream Machine 1.5. This update enhances realism and expands creative possibilities, providing users with more advanced tools and features.

https://twitter.com/LumaLabsAI/status/1825639918539817101

This new image and video generator is capable of generating high-quality videos from simple text descriptions and tends to be more photo-realistic, which could make it more suitable for certain use cases.

Dream Machine 1.5 claims to be faster than competitors Sora and Kling, making it efficient for experimenting with different prompts and ideas. 

Co-founded in 2021 by CEO Amit Jain, Luma AI is currently based in San Francisco, California.

A prime contributor to Luma’s success is AWS. Amazon’s cloud computing subsidiary has provided Luma AI with the infrastructure, exposure and practical applications, showcasing its capabilities in streamlining production processes.

“Great to see how AWS H100 training infrastructure has helped the Luma AI team reduce the time to train foundation models and support the launch of Dream Machine,” said Swami Sivasubramanian, the vice president for data and machine learning services, AWS.

AIM decided to try out Dream Machine to produce a video. Here’s a look at it.

Check out the model here.

In the meantime, we’ve compiled a list of the 10 most astonishing videos created by Dream Machine.

Lifetime

This AI-generated video presents a captivating and emotional narrative following the life of a woman from childhood to old age. It captures the essence of the human experience by showcasing the various stages of life, from the innocence of youth to the wisdom of old age.

The AI-generated visuals serve as a gentle nudge for everyone to experiment with AI-powered video generation for free on Luma’s website. With this, Luma AI has hit a major milestone in the field.

AI Fashion Show

The video features a surreal fashion show created by Dream Machine 1.5. It showcases innovative designs and concepts, blending traditional fashion elements with AI-generated aesthetics on the runway. It also highlights avant-garde fashion.

The video highlights the advanced visual capabilities of AI in capturing and rendering detailed environments.

Mystic

Here, the video features a cyclist riding through a picturesque landscape with stunning lighting that is almost indistinguishable from reality, capturing vibrant colours and a beautiful sunset. This generated video demonstrates body reconstruction, backed by the model’s new technology, which allows users to create videos in various aspect ratios.

AI Tesla Commercial

The AI-generated video showcases a striking red Tesla car driving through a futuristic cityscape and mountains. The scene is rendered with hyper-realistic detail, from the sleek design of the car to the vibrant reflections on its surface. It also captures dynamic lighting and shadows, making the video feel almost lifelike. 

The Tesla smoothly navigates through neon-lit streets, highlighting the capabilities of Dream Machine 1.5 in producing cinematic-quality visuals. This visual experience shows the original artwork while offering a fresh, modern perspective through the unsettling potential of AI.

First-person video

Here, the video shows a man walking through a dense forest, holding a gun, with the camera capturing the scene from behind in a first-person perspective. The forest is rendered with stunning realism, from the intricate details of the vegetation to the dappled sunlight filtering through the trees. 

His movements are smooth and natural, enhancing the immersive quality of the experience. Tension builds as the camera follows closely, emphasising the man’s cautious steps.

Dynamic

The next one has multiple women riding bikes through various locations, with intense warfare unfolding in the background. Each scene is meticulously crafted, showcasing them navigating streets, rugged terrains, and desolate landscapes. 

The AI captures the dynamic motion of the bikes and the dramatic lighting from the warfare, enhancing the video’s realism. This demonstration highlights Dream Machine 1.5’s ability to blend action and atmosphere in complex scenarios.

Driving Through 

The AI-generated video captures a car racing through a small city in the northwest US during a snowstorm. It is depicted with stunning clarity, showcasing the car’s rapid movement against the backdrop. Reflections in the windshield add a dynamic layer, capturing the swirling snowflakes and shimmering city lights. 

The video enhances the sensation of hyperspeed, making the car appear as if it’s slicing through the winter landscape with extraordinary precision. 

Cinematic

This Dream Machine 1.5  video portrays a warrior leading his men through a dense, shadowy forest. The scene is rich with atmosphere, as the group moves cautiously, the warrior’s determined expression reflecting his focus and resolve. The forest is alive with detail, from the rustling leaves to the light piercing through the canopy. 

The men follow closely, their armour and weapons gleaming subtly in the dim light. The video captures the tension and camaraderie of the group, showcasing Dream Machine 1.5’s ability to create dramatic and immersive historical scenes.

The AI algorithms handle light refraction, intense colour and zoom, creating a highly realistic and captivating scene. 

Food

The video is set against the backdrop of an array of vibrant food items, each rendered with mouthwatering detail. From sizzling omelettes, pizzas, pancakes and many more, the video captures the textures, colours, and steam rising from the food, making it look almost lifelike. 

The camera moves smoothly, highlighting the richness and diversity of the spread, from gourmet creations to comfort foods. The model’s ability to create realistic visuals from text descriptions is remarkable, showcasing its potential to revolutionise the way images are created.

Cinematography

In this video, a solitary man is depicted walking through the streets of a city. The scene looks incredibly realistic, with shadows stretching across the abandoned buildings. As the man walks, the camera focuses on him, and then in a smooth, seamless transition, the shot shifts from his shoes to his face, revealing his expression.

The post 10 Insane Videos by Luma’s Dream Machine 1.5  appeared first on AIM.

]]>
Python Can’t Get Package Management Right https://analyticsindiamag.com/developers-corner/python-cant-get-package-management-right/ Mon, 02 Sep 2024 05:41:19 +0000 https://analyticsindiamag.com/?p=10134206

Python cannot handle two different versions of the same package which leads to “dependency hell”, causing entire installations to fail.

The post Python Can’t Get Package Management Right appeared first on AIM.

]]>

The struggle is real. When a developer uses multiple package managers, there’s a risk of modules being overwritten or conflicting. Around 4.71% of packages on PyPI have module conflicts in their dependency graphs, leading to broken environments or even security risks.

Often, different package managers use different lock file formats, which can cause issues when switching between tools or collaborating with others using different package managers. In Python, things can get much worse when you consider dependency management.  

In the Python world, there are multiple package and environment managers, including pip, Conda, Poetry, pipenv and pyenv, each of which seems to have its own flaws. 

Why does it matter? This makes things confusing for both new and experienced developers, and eventually everything feels unreasonably slow. Most users try to solve this by replicating someone else’s environment without giving it a second thought, and that does not work either.

But Python is Deaf to Dependency Resolution 

One of the primary issues in Python dependency management is handling conflicting dependencies.

For instance, pip, the default package manager, cannot install two different versions of the same package into one environment. This situation, often described as “dependency hell”, causes entire installations to fail, leading to unexpected behaviour in projects.

A few months ago, a Reddit user remarked that Python really feels like a programming environment from the early 80s, when a developer had a single project on their PC, and that was all they worked on for years. 

“Python wants to do everything on the global system level, including runtime versioning and packages. That means that any two developers can think they have a working project on their system, even though they have radically different setups. This makes handing off and deploying Python applications a nightmare,” he added further, suggesting why dependency resolution is a nightmare on Python. 

However, the strangest part of dependency resolution is that pip makes assumptions. The pip documentation on dependency resolution explains that pip assumes package versions during installation and only checks those assumptions later, which can lead to conflicts if they turn out to be wrong. 
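
Those silently broken assumptions can at least be surfaced after the fact. Below is a minimal sketch, assuming the third-party packaging library is installed, that walks every distribution in the current environment and flags declared requirements the environment no longer satisfies. It is a rough audit, not a re-implementation of pip's resolver, and the package names it prints are simply whatever happens to be installed on your machine.

```python
# Rough audit of the current environment (not how pip resolves): for every
# installed distribution, check whether its declared requirements are
# satisfied by what is actually installed.
from importlib.metadata import PackageNotFoundError, distributions, version
from packaging.requirements import Requirement
from packaging.version import Version

def find_conflicts():
    conflicts = []
    for dist in distributions():
        for req_str in dist.requires or []:
            req = Requirement(req_str)
            # Skip requirements gated behind extras or non-matching markers.
            if req.marker and not req.marker.evaluate({"extra": ""}):
                continue
            try:
                installed = Version(version(req.name))
            except PackageNotFoundError:
                conflicts.append((dist.metadata["Name"], req_str, "not installed"))
                continue
            if not req.specifier.contains(installed, prereleases=True):
                conflicts.append((dist.metadata["Name"], req_str, str(installed)))
    return conflicts

for owner, wanted, found in find_conflicts():
    print(f"{owner} declares '{wanted}' but the environment has {found}")
```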

Managing dependencies can also be resource-heavy. One user reported having more than 100 GB of their hard drive filled with Python virtual environment dependencies, highlighting the storage impact of maintaining multiple environments.

Ergo, Virtual Environments

“I’m afraid of having 2000 folders, each one with a different virtual environment,” said one Reddit user, expressing confusion about virtual environments. Simply running a project in isolation becomes cumbersome. 

While virtual environments are essential for project isolation and dependency management, there are instances where users find that they create problems rather than solve them. 

Users have previously reported that package versions and dependencies can still conflict within virtual environments, requiring manual resolution in some cases, which directly calls Python’s promise of isolation into question. 

Some developers view virtual environments as wasteful, believing they unnecessarily duplicate libraries for each project. As one Reddit user stated, “It seems like you’re installing a new copy of every library every time you start a new project, which seems like a waste of resources.”

The complexity of virtual environments can be overwhelming for those new to Python. A Reddit user expressed extreme frustration, saying, “I spend way more time just trying to get my virtual environment up, project dependencies installed, and IDE configured than I do actually coding.”
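
To be fair, the standard library can script most of that setup. Here is a minimal sketch, in which the project path and the requirements file name are purely illustrative, that creates a per-project environment and installs its pinned dependencies into it:

```python
# Create an isolated environment inside the project folder, then install the
# project's pinned dependencies into it. Standard library only, plus pip.
import subprocess
import venv
from pathlib import Path

def bootstrap(project_dir: str) -> None:
    project = Path(project_dir)
    env_dir = project / ".venv"
    venv.create(env_dir, with_pip=True)   # isolated interpreter with its own pip
    pip = env_dir / "bin" / "pip"         # on Windows this lives in Scripts\pip.exe
    requirements = project / "requirements.txt"
    if requirements.exists():
        subprocess.run([str(pip), "install", "-r", str(requirements)], check=True)

bootstrap("my-project")                   # illustrative path
```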

Several developers recommend using Docker to avoid virtual environment issues altogether. This approach encapsulates the entire environment, making it more reproducible across different systems.

The post Python Can’t Get Package Management Right appeared first on AIM.

]]>
Top 10 Dying Programming Languages That Vanished Over Time https://analyticsindiamag.com/ai-mysteries/10-dead-programming-languages/ Mon, 02 Sep 2024 04:25:42 +0000 https://analyticsindiamag.com/?p=10096843

Programming languages have come a long way since Autocode in the complexity of tasks they can accomplish

The post Top 10 Dying Programming Languages That Vanished Over Time appeared first on AIM.

]]>

Programming languages are constantly evolving, with a life cycle of popularity, growth and decline. The reasons behind their decline vary from outdated principles to newer, more efficient languages gaining popularity. Here are 10 languages that enjoyed popularity in their prime but faded into oblivion in the 21st century. 

COBOL

In 1960, the CODASYL organisation played a significant role in the development of COBOL, a programming language influenced by the division between business and scientific computing. During that time, high-level languages in the industry were either used for engineering calculations or data management. COBOL, considered one of the four foundational programming languages along with ALGOL, FORTRAN, and LISP, was once the most widely used language worldwide. It continues to operate many of our legacy business systems.

Cause of Death: Two factors contributed to COBOL’s decline. Firstly, it had minimal connections with other programming language efforts. Very few developers built upon COBOL, so its influence on second- and third-generation languages, which benefited from lessons learned from their predecessors, was scarce. Secondly, COBOL is exceptionally intricate, even by today’s standards. Consequently, COBOL compilers fell behind those of contemporaneous microcomputers and minicomputers, giving other languages the opportunity to thrive and eventually surpass it.

ALGOL

In 1960, the ALGOL committee aimed to create a language for algorithm research, with ALGOL-58 preceding and quickly being replaced by ALGOL-60. Despite being relatively lesser known today compared to LISP, COBOL, and FORTRAN, ALGOL holds significant importance, second only to LISP, among the four original programming languages. It contributed to lexical scoping, structured programming, nested functions, formal language specifications, call-by-name semantics, BNF grammars, and block comments.

Cause of Death: ALGOL was primarily a research language, not intended for commercial use. Its specification lacked input/output capabilities, making practical application difficult. As a result, numerous ALGOL-like languages emerged in the 1960s and 1970s as people extended ALGOL with input/output capabilities and additional data structures; examples include JOVIAL, SIMULA, CLU and CPL. Subsequent languages were based on these extensions rather than on ALGOL itself, and ALGOL’s descendants ultimately overshadowed and outpaced it in popularity and usage.

APL

APL was created by Ken Iverson in 1962. Originally developed as a hand-written notation for array mathematics, IBM adopted it as a programming language. APL focused on array processing, enabling concise manipulation of large blocks of numbers. It gained popularity on mainframe computers due to its ability to run with minimal memory requirements.

APL revolutionised array processing by introducing the concept of operating on entire arrays at once. Its influence extends to modern data science and related fields, with its innovations inspiring the development of languages like R, NumPy, pandas, and Matlab. APL also has direct descendants such as J, Dyalog, K, and Q, which, although less successful, still find extensive use in the finance sector.
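
As a rough illustration of that lineage, the snippet below uses NumPy (assumed to be installed) to show the APL-style habit of operating on a whole array at once instead of looping over elements; the numbers are arbitrary:

```python
import numpy as np

prices = np.array([120.0, 98.5, 143.2, 87.9])

taxed = prices * 1.18              # one expression, applied to every element
expensive = prices[prices > 100]   # boolean mask selects elements, no loop written

print(taxed)
print(expensive)
```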

Cause of Death: APL faced challenges due to keyboard limitations. The language’s non-ASCII symbols made it difficult for widespread adoption. Ken Iverson addressed this issue in 1990 with J, which utilised digraphs instead of distinct symbols. However, this change came relatively late and did not gain significant traction in popularising a radically different programming style. Another challenge was APL’s limitation to homogeneous data, as it did not support storing both strings and numbers in the same data structure. Working with strings was also cumbersome in APL. These limitations, including the absence of dataframes, hindered APL’s suitability for modern data science applications.

BASIC

Created by John Kemeny in 1964, BASIC originated as a simplified FORTRAN-like language intended to make computer programming accessible to non-engineering individuals. BASIC could be compactly compiled into as little as 2 kilobytes of memory and became the lingua franca for early-stage programmers. It was commonly used by individuals programming at home in the 1970s.

Its major technical impact lay in its runtime interpretation. It was the first language to feature a real-time interpreter, beating APL by a year. 

Cause of Death: BASIC faced the perception of being a “lesser” language compared to other programming languages used by professional programmers. While it continued to be used by children and small business owners, it was not considered the language of choice for experienced programmers. As microcomputers with larger memory capacities became available, BASIC was gradually replaced by languages like Pascal and C. BASIC persisted for some time as a legacy teaching language for kids but eventually faded away from that niche as well.

PL/I

Developed by IBM in 1966, PL/I aimed to create a language suitable for both engineering and business purposes. IBM’s business was previously divided between FORTRAN for scientists and COMTRAN for business users. PL/I merged the features of these two languages, resulting in a language that supported a wide range of applications.

PL/I implemented structured data as a type, which was a novel concept at the time. It was the first high-level language to incorporate pointers for direct memory manipulation, constants, and function overloading. Many of these ideas influenced subsequent programming languages, including C, which borrowed from both BCPL and PL/I. Notably, PL/I’s comment syntax is also used in C.

Cause of Death: PL/I faced challenges as it tried to straddle the line between FORTRAN and COBOL. Many FORTRAN programmers considered it too similar to COBOL, while COBOL programmers saw it as too similar to FORTRAN. IBM’s attempt to compete with two established languages using a more complex language deterred wider adoption. Moreover, IBM held the sole compiler for PL/I, leading to mistrust from potential users concerned about vendor lock-in. By the time IBM addressed these issues, the computing world had already transitioned to the microcomputer era, where BASIC outpaced PL/I.

SIMULA 67

Ole Dahl and Kristen Nygaard developed SIMULA 67 in 1967 as an extension of ALGOL for simulations. SIMULA 67, although not the first object-oriented programming (OOP) language, introduced proper objects and laid the groundwork for future developments. It popularised concepts such as class/object separation, subclassing, virtual methods, and protected attributes. 

Cause of Death: SIMULA faced performance challenges, being too slow for large-scale use. Its speed was particularly limited to mainframe computers, posing difficulties for broader adoption. It’s worth noting that Smalltalk-80, which extended SIMULA’s ideas further, had the advantage of Moore’s Law advancements over the extra 13 years. Even Smalltalk was often criticised for its speed. As a result, the ideas from SIMULA were integrated into faster and simpler languages by other developers, and those languages gained wider popularity.

Pascal

Niklaus Wirth created Pascal in 1970 to capture the essence of ALGOL-60 after ALGOL-68 became too complex. Pascal gained prominence as an introductory language in computer science and became the second most popular language on Usenet job boards in the early 1980s. 

Pascal popularised ALGOL syntax outside academia, leading to ALGOL’s assignment syntax, “:=”, being called “Pascal style”. 

Cause of Death: The decline of Pascal is complex and does not have a clear-cut explanation like some other languages. While some attribute its decline to Edsger Dijkstra’s essay ‘Why Pascal is not my favourite language’, this explanation oversimplifies the situation. Pascal did face competition from languages like C, but it managed to hold its own for a significant period. It’s worth noting that Delphi, a variant of Pascal, still ranks well in TIOBE and PYPA measurements, indicating that it continues to exist in certain niches.

CLU

CLU was developed by Barbara Liskov in 1975, with the primary intention of exploring abstract data types. Despite being relatively unknown, CLU is one of the most influential languages in terms of ideas and concepts. CLU introduced several concepts that are widely used today, including iterators, abstract data types, generics, and checked exceptions. Although these ideas might not be directly attributed to CLU due to differences in terminology, their origin can be traced back to CLU’s influence. Many subsequent language specifications referenced CLU in their development.
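
To see how far those ideas travelled, consider Python's generators, which descend directly from CLU's iterators; the toy function below is purely illustrative:

```python
# A generator yields values lazily, one at a time, as the caller iterates;
# this is the same idea CLU's iterators introduced.
def evens(limit):
    n = 0
    while n < limit:
        yield n
        n += 2

for value in evens(10):
    print(value)   # prints 0, 2, 4, 6, 8
```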

Cause of Death: CLU served as a demonstration language with Liskov’s primary goal being the adoption of her ideas rather than the language itself. This objective was largely achieved, as nearly all modern programming languages incorporate elements inspired by CLU. 

SML

Robin Milner developed ML in 1976 while working on the LCF Prover, one of the first proof assistants. Initially designed as a metalanguage for writing proofs in a sound mathematical format, ML eventually evolved into a standalone programming language. 

It is considered one of the oldest “algebraic programming languages”. ML’s most notable innovation was type inference, allowing the compiler to deduce types automatically, freeing programmers from explicitly specifying them. This advancement paved the way for the adoption of typed functional programming in real-world applications.

Cause of Death: ML initially served as a specialised language for theorem provers, limiting its broader usage. While SML emerged in the same year as Haskell, which exemplified a more “pure” typed functional programming language, the wider programming community paid more attention to Haskell. ML’s impact and adoption remained substantial within academic and research settings but did not achieve the same level of popularity as some other languages.

Smalltalk

Smalltalk, developed by Alan Kay, had multiple versions released over time. Each version built upon the previous one, with Smalltalk-80 being the most widely adopted and influential. It is often regarded as the language that popularised the concept of object-oriented programming (OOP). While not the first language with objects, Smalltalk was the first language where everything, including booleans, was treated as an object. Its influence can be seen in the design of subsequent OOP languages, such as Java and Python.
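
That lineage is easy to spot in Python today; a trivial, illustrative check shows that even booleans and integers are full objects with classes and methods:

```python
# In the Smalltalk tradition, everything is an object.
print(isinstance(True, object))   # True
print(True.__class__)             # <class 'bool'>
print((3).bit_length())           # 2 (even integers carry methods)
```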

Cause of Death: Smalltalk faced challenges related to interoperability with other tools and runtime performance. Its difficulty in integrating with existing systems and relatively slower execution speed hindered broader adoption. Its decline can be attributed to the emergence of Java, which had a more seamless interop with existing systems and gained overwhelming popularity. The legacy of Smalltalk lives on in the principles and design patterns that have become integral to modern software development.

The post Top 10 Dying Programming Languages That Vanished Over Time appeared first on AIM.

]]>
US Leads AI Safety with OpenAI, Anthropic Joining National AI Institute https://analyticsindiamag.com/ai-origins-evolution/us-leads-ai-safety-with-openai-anthropic-joining-national-ai-institute/ Sun, 01 Sep 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10134189

It’s high time India also considered establishing an AI Safety Institute, akin to those in the UK and US, to responsibly manage the rapid growth of AI.

The post US Leads AI Safety with OpenAI, Anthropic Joining National AI Institute appeared first on AIM.

]]>

Last week, in a one-of-a-kind effort, OpenAI signed an MoU with the US Artificial Intelligence Safety Institute (US AISI), part of the larger US Department of Commerce’s National Institute of Standards and Technology. At this juncture of the AI revolution, the collaboration is aimed at furthering OpenAI’s commitment to safety, transparency and human-centric innovation by building a framework that the world can contribute to. It will enable the US AI Safety Institute to get early access to test and evaluate future models prior to their public release. Anthropic has also agreed to sign this partnership. 

Sam Altman, CEO of OpenAI, took to X (formerly Twitter) to underscore the significance of this partnership. “We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” said Altman, who called the step important and suggested it should happen at a national level. “US needs to continue to lead!”

But Why US AI Safety Institute? 

Elizabeth Kelly, director of the US AI Safety Institute, has been a strong proponent of safety in AI innovation and has brokered many such strategic partnerships in the past. “With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” she said in a statement. 

The US AI Safety Institute was born in 2023, under the Biden-Harris administration, to help develop testing guidelines for safe AI innovation in the US. 

“Safety promotes trust, which promotes adoption, which drives innovation, and that’s what we are trying to promote at the US AI Safety Institute,” she said in another interview, highlighting the role of the institute in the coming years. 

Through initiatives like this, the US could lead the way for more voluntary AI safety adoption. Anthropic, OpenAI’s rival, has previously collaborated with government bodies – like the UK’s Artificial Intelligence Safety Institute (AISI) – to conduct pre-deployment testing of its models. It would be interesting to see if OpenAI partners with the UK body too.  

Why It Matters 

The need for the US AI Safety Institute (US AISI) arises from concerns about the impact of poorly managed AI systems on democracy, as highlighted by Dario Amodei, CEO of Anthropic. 

Amodei said that AI must be aligned with human values and ethics to support democratic institutions effectively. The collaboration between Anthropic, OpenAI, and the US AISI is a response to the growing power of AI, which, if left unchecked, could exceed that of national governments and economies. This partnership aims to establish safety standards, conduct pre-deployment testing, and regulate AI to prevent misuse, particularly in politically sensitive areas such as elections.

“I think it’s just really important that we provide these services well. It makes democracy as a whole more effective, and if we provide them poorly, it undermines the notion of democracy,” said Amodei. 

US vs China vs India 

The US’s push for AI safety leadership with OpenAI and Anthropic aims to counter China’s rapid AI advancements and maintain global dominance in ethical AI governance. 

At the same time, there are concerns around China winning the AI race due to its efficient state control, outpacing US efforts hindered by political gridlock. “China is probably going to win the AI game, as their state control is much more efficient than corrupt US politicians,” said Pratik Desai, adding that he wants the US and freedom to win. “I just don’t trust the current bunch of politicians.” 

China’s dominance in several AI and technology areas is quite evident, with the country leading in the “visual AI and LLM field as they have the best state-operated surveillance system,” added Desai. 

On the bright side, standardised scrutiny could promote a more democratic approach to developing models, which is perhaps lacking in economies like China. More countries are slowly realising the importance of AI safety institutes and the need to invest in AI safety as much as they do in AI development. 

It’s high time India also considered establishing an AI Safety Institute, akin to those in the UK and US, to responsibly manage the rapid growth of AI technologies. “We need an AI Safety Institute here and now,” said former CSIR director-general Raghunath Mashelkar, in order to maximise the benefits and minimise the risks associated with AI in the world’s most populous nation.

Former Union Minister of India, Rajeev Chandrasekhar, also underscored the critical need for democracies and their allied nations to shape the future of technology, particularly in light of concerns raised by Paul Buchheit, creator of Gmail, about the potential dangers of AI development led by China, which could lead to global surveillance and censorship. 

“It’s extremely important — more than critical — that the future of tech is shaped by democracies and their partner countries,” said Chandrasekhar.

The post US Leads AI Safety with OpenAI, Anthropic Joining National AI Institute appeared first on AIM.

]]>
NVIDIA, Apple Blow OpenAI’s Bubble https://analyticsindiamag.com/ai-origins-evolution/nvidia-apple-blow-openais-bubble/ Sat, 31 Aug 2024 08:41:50 +0000 https://analyticsindiamag.com/?p=10134186

Following the investment, the money will ultimately flow back to NVIDIA as OpenAI purchases more compute resources to train its next frontier model. 

The post NVIDIA, Apple Blow OpenAI’s Bubble appeared first on AIM.

]]>

NVIDIA is reportedly in discussions to join a funding round for OpenAI that could value the AI startup at more than $100 billion, according to the Wall Street Journal. In May 2024, OpenAI was valued at approximately $86 billion.

This news comes on the heels of NVIDIA’s impressive July quarter results, where revenues surpassed $30 billion—a 122% increase from the previous year. 

Besides NVIDIA, Apple and Microsoft are also considering participating in the financing. Thrive Capital is reportedly leading the round with a $1 billion investment, while NVIDIA is evaluating a potential contribution of around $100 million, added the report. Notably, Microsoft has invested $13 billion in OpenAI overall.

While OpenAI is dependent on NVIDIA GPUs to train its upcoming frontier model, Apple recently partnered with the company, integrating ChatGPT into Siri.

On Thursday, it was reported that ChatGPT has surpassed 200 million weekly active users, doubling its count from the previous year. 

Surprisingly, this year, OpenAI has released only GPT-4o and GPT-4o Mini. However, the company has announced several other products, including Sora, SearchGPT, Voice Engine, GPT-4o Voice, and most recently, Strawberry and Orion. It seems the announcements were likely intended to generate hype and raise funds.

NVIDIA is Investing in Itself, Not OpenAI 

Following the investment, the money will ultimately flow back to NVIDIA as OpenAI purchases more compute resources to train its next frontier model. 

NVIDIA is keen to secure its ecosystem for the year ahead and is now concentrating on its Blackwell GPUs. This lineup includes models B100 and B200, built for data centres and AI applications.

NVIDIA chief Jensen Huang said that the Blackwell is expected to come out by the fourth quarter this year. “We’re sampling functional samples of Blackwell, Grace Blackwell, and a variety of system configurations as we speak. There are something like 100 different types of Blackwell-based systems that were shown at Computex, and we’re enabling our ecosystem to start sampling those,” said Huang. 

However, previous reports indicated that Blackwell could be delayed by three months or more due to design flaws, a setback that could affect customers such as Meta Platforms, Google, and Microsoft, which have collectively ordered tens of billions of dollars’ worth of these chips.

Huang believes this is just the beginning and that there’s much more to come in generative AI. “Chatbots, coding AIs, and image generators are growing rapidly, but they’re just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting, and search systems,” he said.

According to NVIDIA CFO Colette Kress, the next-generation models will require 10 to 20 times more compute to train, along with significantly more data. 

Earlier this year, Huang personally hand-delivered the first NVIDIA DGX H200 to OpenAI. 

OpenAI’s GPT-4o voice features, demonstrated during the Spring Update event, were made possible with the help of the NVIDIA H200. “I just want to thank the incredible OpenAI team, and a special thanks to Jensen and the NVIDIA team for bringing us the advanced GPU that made this demo possible today,” said OpenAI CTO Mira Murati during OpenAI’s Spring Update.

Apple Wants a Slice of OpenAI 

Apple is catching up in the AI race. The company recently released iOS 18.1 beta 3, introducing the AI-powered Clean Up tool under Apple Intelligence, which removes unwanted objects from photos to enhance image quality. 

This Apple Intelligence feature is based on a 3-billion-parameter model that Apple developed recently. 

While Apple Intelligence handles day-to-day tasks well, it is not focused on the stronger reasoning capabilities that will be required in the near future. This is where OpenAI comes into the picture. 

“This is a sign that Apple is not seeing a path where it makes sense to build a competitive, full feature LLM,” said Gene Munster, Managing Partner, Deepwater Asset Management on Apple’s investment in OpenAI. 

He added that this means Apple will be reliant on OpenAI, Google, and possibly even Meta to deliver about a third of their AI features in the long term.

OpenAI chief Sam Altman is a huge fan of Apple, and his startup eventually ended up partnering with the company. He recently lauded the Cupertino-based tech giant for its technological prowess, saying, “iPhone is the greatest piece of technology humanity has ever made”, and that it is tough to get beyond it as “the bar is quite high.” 

As part of the Apple-OpenAI partnership, iOS, iPadOS, and macOS users will get access to ChatGPT powered by GPT-4o later this year. Users can access it for free without creating an account, while ChatGPT subscribers can connect their accounts and access paid features right from these experiences.

Interestingly, when OpenAI announced the ChatGPT desktop app, it was first released for Mac users rather than for Windows.

Moreover, it was said that Apple wasn’t paying OpenAI anything, as it was doing the startup a favour by making ChatGPT available to billions of customers. 

However, investing in OpenAI today would be a smart move for Apple, as it would provide access to the latest OpenAI models, similar to how Microsoft’s AI services primarily rely on OpenAI.

Meanwhile, OpenAI definitely has a soft corner for Apple. This affinity was clearly displayed at the OpenAI Spring Update, where MacBooks and iPhones were prominently used, while Microsoft Windows products were notably absent. 

The post NVIDIA, Apple Blow OpenAI’s Bubble appeared first on AIM.

]]>
Software Engineers Have to Upskill Faster Than Anyone Else https://analyticsindiamag.com/ai-origins-evolution/software-engineers-have-to-upskill-faster-than-anyone-else/ Sat, 31 Aug 2024 04:30:00 +0000 https://analyticsindiamag.com/?p=10134157

But “upskill to what?” is what people ask.

The post Software Engineers Have to Upskill Faster Than Anyone Else appeared first on AIM.

]]>

The barrier to entry for becoming a developer is dropping every day. The most recent phenomenon that everyone is still talking about is Anysphere’s Cursor AI coding tool, which has basically made everyone a developer. Now, more tools are coming up in the same category, such as Codeium, Magic, and Zed AI, all trying to crack the same formula. 

This naturally raises the question: what happens to the software developers of today? Graduating from college with a computer science degree only to compete with people who become software engineers through AI tools, the turmoil of the average software engineer is real.

The solution is easier said than done – upskill yourselves and focus on higher order things such as building foundational AI. Even 8-year-olds are building apps using Cursor AI in 45 minutes. 

A Profession Like Never Before

Since there is no barrier to entry, no degree requirements, and no regulations about who can join the market, software engineering has become a profession unlike any other in history. There are plenty of opportunities for developers to upskill.

But “upskill to what?” is what people ask. 

The conversation shifts so swiftly, from LLMs to SLMs to coding assistants to AI agents, that it can be challenging to determine which new skills are worth acquiring. This question reflects a broader uncertainty about how to prioritise learning in a field where the next big thing always seems just around the corner.

Saket Agrawal, a developer from IIT Guwahati, said that it is not so much about the technological shift as about automation tools that reduce the time and effort required for the same skills. “I don’t see any big threat to existing software skills suddenly and software has been the field all the time which needs continuous skills updation based on requirement without leaving your old skills instantly,” he said. 

Another user on X put it in a funny way. “Software engineers need more updates than my grandma’s Windows 95. Ever tried explaining AI to her? It’s like defining gluten-free bread to a caveman!”

It is widely discussed that a lot of software engineering jobs are dying. “Winter is coming for software engineering,” said Debarghya Das of Menlo Ventures, adding that many of the current software engineering jobs would become a distant memory. 

Scott Stouffer adds another layer to this conversation by suggesting that some are experiencing an upgrade in their lives at a pace that surpasses others. This notion of being “upgraded” faster could imply a divide between those who adapt quickly to technological advancements and those who struggle to keep up.

LLMs to Upskill?

There is, however, an interesting caveat to all of this conversation around upskilling. Hardcore skilled developers believe that leveraging tools such as Cursor can take them to a level the new developers would never be able to reach. Yann LeCun has already told developers getting into the AI field not to work on LLMs.

Andrej Karpathy recently said that the future of coding is ‘tab tab tab’, referring to auto code completion tools such as Cursor. Further in the thread, he added that with the capabilities of LLMs shifting so rapidly, it is important for developers to continually adapt to the current capabilities.

Some people are sceptical about whether they should even get into the computer science field anymore. “…if I was new to programming I would be too tempted to skip actual learning in favour of more LLM usage, resulting in many knowledge gaps,” said a user replying to Karpathy. Even so, leaning on these tools feels like the way forward for many developers. 

This is similar to what Francois Chollet, the creator of Keras, said a few months ago. “There will be more software engineers (the kind that write code, e.g. Python, C or JavaScript code) in five years than there are today.” He added that the estimated number of professional software engineers today is 26 million, which would jump to 30-35 million in five years.

This is because developers who are proficient at coding without code generators can never be replaced. People who built programming languages and foundational tools remain far better versed in coding than people who are just using Cursor to build apps. Sure, there can be an abundance of people building apps in the future, but their scope would be limited to just that.

Meanwhile, highly skilled 10x developers would focus on leveraging such tools, or possibly finding flaws in them, to create even better software; in other words, creating the next Cursor or ChatGPT.

There is an abundance of work to be done. For instance, enhancing hardware or building infrastructure for running future workloads can only be taken on by experts in the field. Companies such as Pipeshift AI, Groq, Jarvis Labs, and many others are already working on problems very different from coding. 

The truth is that such AI tools can never replace human intelligence or jobs, only augment them. “Generating working code is only a part of the responsibility,” said VJ’s Insights in a post on X, adding, “Yes, if you are someone who *just* writes code, you need to start thinking differently.”

There are predictions that the near future of software engineering will be about managing a team of AI agent engineers and telling them how to code. This will make every engineer akin to an engineering manager, delegating basic tasks to coding agents while focusing on higher-level aspects such as understanding requirements, architecting systems, and deciding what to build.

It is high time software engineers started upskilling themselves, and currently, working with generative AI tools, not without them, looks like the best way forward. Who knows, you might even become a solo entrepreneur building a billion-dollar company alone.

The post Software Engineers Have to Upskill Faster Than Anyone Else appeared first on AIM.

]]>
Reliance Jio Free 100 GB AI Cloud Storage to Take on Google Cloud https://analyticsindiamag.com/ai-breakthroughs/reliance-knows-how-to-scale-ai-in-india/ Fri, 30 Aug 2024 13:32:02 +0000 https://analyticsindiamag.com/?p=10134171

Reliance Jio’s massive user base is one of the most valuable assets in its AI journey.

The post Reliance Jio Free 100 GB AI Cloud Storage to Take on Google Cloud appeared first on AIM.

]]>

A ‘Jio moment’ in AI was long overdue – and Mukesh Ambani has finally delivered it! At its 47th Annual General Meeting (AGM), Reliance Industries unveiled a series of AI initiatives. 

Major highlights included the introduction of Jio Brain, Jio AI-Cloud, Jio Phone Call AI, and the vision for a national AI infrastructure. 

Ambani, the RIL chairman and managing director, emphasised AI’s central role in the company’s future, showcasing innovations like JioBrain, an AI service platform that will enhance operations across all Reliance enterprises. 

Jio also introduced an AI-powered feature designed to transcribe, summarise, and translate phone conversations in real time. 

To support these and other AI-driven initiatives, Reliance announced the establishment of gigawatt-scale AI data centres in Jamnagar, Gujarat, powered by green energy. These centres will be part of a broader plan to create AI inference facilities nationwide, ready to meet India’s growing demand for AI capabilities.

Reliance Jio’s massive user base is one of the most valuable assets in its AI journey. Jio has over 490 million customers, each consuming an average of 30 GB of data monthly. This contributes to Jio carrying a staggering 8% of global data traffic.

“Thanks to Jio, India is now the world’s largest data market,” said Ambani.

The immense data pool provides Jio with unparalleled insights and the ability to continuously refine and scale its AI solutions, ensuring they are both effective and widely applicable.

How Reliance Jio is Scaling AI in India

Unlike many AI-focused companies such as OpenAI, which primarily cater to B2B markets, Jio is targeting the consumer market with its AI products and services. This is a strategy also adopted by Meta. The company made AI available on all its social media platforms and WhatsApp, taking AI directly to consumers.

Leveraging its deep understanding of consumer needs across sectors like healthcare, telecommunications, manufacturing, and customer services, Reliance is tailoring AI solutions that directly impact the working environment of its customers. 

This B2C approach sets Jio apart from global competitors, positioning it as a leader in consumer-centric AI innovations. With Jio AI, Ambani is bringing AI directly to its 490 million Jio customers.

Jio 100 GB Free AI Cloud Storage to Take On Google

Interestingly, Jio’s entry into the AI cloud market is a direct challenge to established players like Google Cloud. As part of its AI strategy, Jio introduced the Jio AI-Cloud Welcome Offer, which provides up to 100 GB of free cloud storage for Jio users starting this Diwali. 

In contrast, Google Cloud currently offers its basic plan of 100 GB for INR 130 per month, with more extensive plans like 200 GB at INR 210 per month and 2 TB at INR 650 per month. They even offer an AI Premium plan with 2 TB of storage and access to advanced AI models like Gemini Advanced. 

Jio’s free cloud offering is a strong counter to these paid plans, positioning it as a cost-effective alternative in the market. This aggressive pricing strategy is a replica of Jio’s launch in the telecom sector, where free services were initially offered to attract a massive user base. 

When Jio launched its 4G services, it offered free data and calls, which quickly attracted millions of users. This approach helped Jio capture over 100 million mobile subscribers within just 170 days, with data consumption on its network surpassing that of the US and doubling that of China.

So, just as Jio revolutionised the telecom industry, Jio Cloud aims to disrupt the cloud market by making AI services more affordable and accessible to the masses.

Jio Could be the Next Big Hyperscaler

Jio is also targeting the B2B cloud segment with its inference facilities. In 2023, Reliance Industries Limited announced its partnership with US-based chipmaker NVIDIA to advance AI in India. This would put Jio in direct competition with the likes of AWS, Google Cloud and Microsoft Azure.

The collaboration aims to build AI infrastructure in India that they claim will be more powerful than the fastest supercomputer in the country. Further, NVIDIA said it will provide Reliance with access to the GH200 Grace Hopper Superchip and NVIDIA DGX Cloud for exceptional performance.

However, Jio is not alone in the market. Earlier this year, Yotta Data Services received the first tranche of 4,000 GPUs from NVIDIA. It further plans to scale up its GPU stable to 32,768 units by the end of 2025. And in 2023, Tata Communications entered into a partnership with NVIDIA to develop a similar hyperscale infrastructure.

According to sources, the upcoming NVIDIA India Summit is poised to reveal an expansion of the chipmaker’s collaboration with Reliance. Speculation centres on the introduction of the Blackwell GPU and the NIM platform, with potential plans for employee training and upskilling in AI.

Interestingly, NVIDIA has also recently collaborated with Infosys to bring AI-driven, customer-centric solutions to the telecommunications industry. The question now is, who’s the telecom giant set to benefit from this AI revolution?

The post Reliance Jio Free 100 GB AI Cloud Storage to Take on Google Cloud appeared first on AIM.

]]>
Finacus Solutions and pi-labs Develops World’s First eKYC Solution Resistant to Deepfake Frauds https://analyticsindiamag.com/ai-news-updates/finacus-solutions-and-pi-labs-develops-worlds-first-ekyc-solution-resistant-to-deepfake-frauds/ Fri, 30 Aug 2024 11:38:27 +0000 https://analyticsindiamag.com/?p=10134160

This partnership signifies a pivotal moment in the application of AI within the eKYC process.

The post Finacus Solutions and pi-labs Develops World’s First eKYC Solution Resistant to Deepfake Frauds appeared first on AIM.

]]>

Finacus Solutions, a company specialising in banking technology, and pi-labs.ai, a startup focused on AI-powered deepfake detection, have joined forces to develop a new e-KYC (electronic Know Your Customer) solution. This collaboration aims to improve the security of e-KYC by addressing the growing concern of deepfake fraud within the financial industry.

This partnership marks a significant step forward in the e-KYC landscape. The new solution integrates pi-labs.ai’s deepfake detection technology into Finacus Solutions’ existing e-KYC framework.

Addressing the Rise of Deepfake Fraud

The Reserve Bank of India (RBI) has mandated live video for KYC procedures, supplemented by enhanced security measures. However, the emergence of deepfake videos in Video KYC presents a formidable challenge, particularly for credit and loan applications.

This partnership signifies a pivotal moment in the application of artificial intelligence within the eKYC process. pi-labs.ai’s AI tools will complement the existing manual authentication process, creating a sophisticated hybrid system that merges human expertise with AI-driven insights to ensure the highest level of security. This collaboration not only strengthens the integrity of the eKYC process but also paves the way for further automation, ultimately reducing operational costs.

“The integration of Aadhar-based eKYC has drastically reduced KYC costs,” said Rahul Ayyappan, co-Founder and CTO of Finacus Solutions. “The financial industry has a vested interest in maintaining the integrity of the eKYC process. The AI-based detection capabilities provided by pi-labs.ai will offer our banking clients enhanced security and peace of mind.”

pi-labs.ai’s deepfake detection technology, ‘Authentify,’ is an AI-powered platform designed to detect deepfakes across various media formats. Utilising advanced AI algorithms, pi-labs.ai empowers enterprises to identify and mitigate fraud attempts involving deepfakes.

“Our deepfake detection technology is poised to revolutionise the eKYC process,” said Ankush Tiwari, Founder and CEO of pi-labs.ai. “The collaboration with Finacus Solutions will greatly enhance the reliability and security of eKYC, ensuring that the financial sector remains resilient against emerging challenges.”

Also Read: Banks are Working on Tech to Counter Deepfakes

The post Finacus Solutions and pi-labs Develops World’s First eKYC Solution Resistant to Deepfake Frauds appeared first on AIM.

]]>
This Bengaluru-based AI Startup Knows How to Make Your Videos Viral https://analyticsindiamag.com/ai-breakthroughs/this-bengaluru-based-ai-startup-knows-how-to-make-your-videos-viral/ Fri, 30 Aug 2024 10:30:00 +0000 https://analyticsindiamag.com/?p=10134152

The team has built a metric called "virality score", which is derived from a dataset of 100,000 social media videos.

The post This Bengaluru-based AI Startup Knows How to Make Your Videos Viral appeared first on AIM.

]]>

Editing videos can be a tedious and time-consuming task, taking video editors hours or even days to get their footage ready for release. Moreover, hiring a team of video editors is not something every content creator or small company wants to invest in. This is where vidyo.ai comes into the picture. 

Launched two years ago by Vedant Maheshwari and Kushagra Pandya, the platform has experienced remarkable growth, scaling from zero to approximately 300,000 monthly active users, and achieved a revenue milestone of $2 million. Notably, a significant portion of vidyo.ai’s revenue, about 85%, comes from the US market.

Most recently, the company was part of the Google for Startups Accelerator programme. The company hasn’t raised any funding since its seed round of $1.1 million in 2022.

The team has made significant strides in addressing one of the industry’s most persistent challenges: video editing.

Maheshwari’s journey into the realm of video content and social media began over eight years ago, during which he collaborated with creators and influencers to refine their content strategies across platforms like YouTube, TikTok, and Instagram. 

It was during this period that Maheshwari identified a major pain point: the time-consuming and complex nature of video editing.

This insight led to the creation of vidyo.ai, a platform designed to streamline the video editing process. The vision was to leverage AI to handle 80-90% of the editing, leaving users with the flexibility to make final adjustments before sharing their content on social media.

The platform caters to a diverse user base, including content creators, podcasters, and businesses seeking to generate short-form content with minimal effort. “We essentially enable them to let the AI edit their videos, and then they can publish directly to all social media platforms using our software,” Maheshwari added.

How vidyo.ai Works

vidyo.ai combines OpenAI’s models with proprietary algorithms to transform raw video footage into polished content. Users upload their videos to the platform, which then processes the content through a series of OpenAI prompts and proprietary data. This includes analysing what kind of videos perform well online, identifying potential hooks, and determining effective calls-to-action (CTAs).

“We run the video through multiple pipelines, identifying key hooks and combining them to create a final video. Our algorithms then score these videos based on their potential virality,” Maheshwari elaborated on the process. This “virality score” is derived from a dataset of 100,000 social media videos, allowing the platform to suggest the most promising clips for engagement.
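
For readers curious about the mechanics, the sketch below is a purely hypothetical, simplified illustration in Python of how candidate clips might be scored for “virality”. vidyo.ai’s actual prompts, models and scoring pipeline are proprietary; every name, threshold and word list here is invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Clip:
        start: float   # seconds into the source video
        end: float
        text: str      # transcript of the candidate clip

    # Illustrative signal lists; a real system would learn these from a large video dataset
    HOOK_WORDS = {"secret", "mistake", "never", "how", "why"}
    CTA_WORDS = {"follow", "subscribe", "comment", "share"}

    def virality_score(clip: Clip) -> float:
        """Toy heuristic: short clips with an early hook and a late call-to-action score higher."""
        words = clip.text.lower().split()
        duration = clip.end - clip.start
        length_score = 1.0 if 15 <= duration <= 60 else 0.5
        hook_score = 1.0 if any(w in HOOK_WORDS for w in words[:12]) else 0.0
        cta_score = 0.5 if any(w in CTA_WORDS for w in words[-12:]) else 0.0
        return length_score + hook_score + cta_score

    def best_clips(clips: list[Clip], top_k: int = 3) -> list[Clip]:
        # Rank candidate clips and keep the most promising ones for publishing
        return sorted(clips, key=virality_score, reverse=True)[:top_k]

In a production system, the hand-written heuristics above would be replaced by model-driven analysis of hooks, pacing and CTAs learned from a reference dataset, such as the 100,000 social media videos the company describes.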

When compared to other video editing tools like GoPro’s QuikApp and Magisto, vidyo.ai distinguishes itself with its frame-by-frame analysis of both video and audio content. Unlike these platforms, which often edit videos based on mood or music, vidyo.ai dives deeper into the content to optimise for social media performance.

“We do a comprehensive analysis of the content, ensuring that every aspect is optimised for virality,” Maheshwari said. This level of detail, combined with the ability to publish directly across multiple platforms, provides users with a unique advantage.

Challenges and Opportunities

Despite its success, vidyo.ai faces challenges common to Indian startups, particularly in securing funding. Maheshwari noted that while Indian VCs are cautious about investing in AI, preferring application-layer solutions over foundational work, US VCs often have a more aggressive approach.

“We’ve gone from zero to $2Mn in ARR in less than two years, which is remarkable. However, raising subsequent rounds of funding in India remains challenging due to a lack of clarity on how AI investments will pay off,” Maheshwari explained, saying that the VCs of the US would be ready to invest looking at this metric alone.

He also reflected on the possibility of starting the company in the US instead of India, citing potential benefits in terms of ease of operations and investor interest. “It often feels like running a company in India comes with more challenges compared to the US,” he admitted.

When it comes to its moat, vidyo.ai still stands on the tough ground that Instagram, LinkedIn or TikTok might release a similar feature in their base apps. “There is definitely a little bit of platform risk,” Maheshwari conceded, but he explained that it is unlikely customers would shift, since they don’t want to restrict themselves to the workflow of a creation platform. 

Comparing it to building something like Canva, Maheshwari said that vidyo.ai plans to expand its offerings, including the potential integration of generative AI features like deepfakes and avatars. Currently, the team is also working on an AI-based social media calendar, which would suggest content to users that is likely to perform best in the coming week.

Maheshwari envisions building a comprehensive suite of tools for social media creation and publishing. “Our goal is to develop a full-stack solution that encompasses every aspect of social media content creation,” he said.

The post This Bengaluru-based AI Startup Knows How to Make Your Videos Viral appeared first on AIM.

]]>
Andrew Ng and Yann LeCun Joins Korean National AI Committee As Advisors https://analyticsindiamag.com/ai-news-updates/andrew-ng-and-yann-lecun-joins-korean-national-ai-committee-as-advisors/ Fri, 30 Aug 2024 09:43:41 +0000 https://analyticsindiamag.com/?p=10134150

DeepLearning.AI’s founder, Andrew Ng will play a significant role in shaping South Korea’s AI structure.

The post Andrew Ng and Yann LeCun Joins Korean National AI Committee As Advisors appeared first on AIM.

]]>

AI educator and DeepLearning.AI founder Andrew Ng and Meta AI chief Yann LeCun have joined South Korea’s National AI Committee as advisors. The announcement came following a meeting between Ng, the President of Korea, and the Minister of Technology. This collaboration marks a pivotal moment for the country’s AI ambitions, as it leverages the expertise of two of the most influential figures in the field.

South Korea has been witnessing rapid development in the AI landscape marked by substantial government investment, a flourishing startup ecosystem and breakthroughs in AI chip technology. Andrew Ng’s involvement with the National AI Committee will undoubtedly accelerate these advancements.

Andrew Ng’s recent appointment to the National AI Committee comes on the heels of a series of educational initiatives launched through DeepLearning.AI. These courses cover a broad spectrum of AI topics, equipping learners with the skills to build robust AI applications, train models securely on private data, and explore efficient large language model pre-training techniques. 

South Korea’s AI Ambitions

This educational focus aligns with South Korea’s aspiration to become a global AI leader. Ng’s involvement with the National AI Committee is likely to further this goal, potentially leading to the development of new educational programs that nurture a strong domestic AI talent pool.

The country’s efforts towards AI-based development have started yielding results: South Korea has witnessed a surge in AI startups, exceeding 1,100 in 2024 alone. These startups are making waves in the field, with companies like Upstage developing sophisticated AI models such as Solar, a powerful Korean-language LLM that has even surpassed models from established tech giants. Align AI, another South Korean startup, has garnered recognition for its innovative AI chatbot solutions.

Ng’s extensive experience and educational initiatives through DeepLearning.AI align closely with South Korea’s goals. His involvement with the National AI Committee is likely to lead to the development of new educational programmes and foster a culture of AI innovation.

The post Andrew Ng and Yann LeCun Joins Korean National AI Committee As Advisors appeared first on AIM.

]]>
Meet Bython, the Python With Braces https://analyticsindiamag.com/developers-corner/meet-bython-the-python-with-braces/ Fri, 30 Aug 2024 08:45:57 +0000 https://analyticsindiamag.com/?p=10134140

Bython is Python with braces because Python is awesome, but whitespace is awful.

The post Meet Bython, the Python With Braces appeared first on AIM.

]]>

Python has long been celebrated for its simplicity and readability. However, for developers coming from languages like C++ or Java, its indentation-based syntax can sometimes feel unfamiliar or challenging. This problem served as the inspiration for Bython, funnily touted as the Python with braces. 

Bython is a Python preprocessor which aims to provide an alternative syntax for Python that uses curly braces to define code blocks, similar to languages like C++ or Java, while still leveraging Python’s interpreter and ecosystem.

But it does not end there. The main draw of Bython is that you don’t have to worry about indentation. It won’t throw errors even if you mix tabs and spaces, or copy a piece of code into a file that uses a different indentation style; nothing breaks.
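
To make the idea concrete, here is a minimal illustrative sketch of what Bython-style source and its plain-Python equivalent might look like. The exact grammar details (file extensions, or how else clauses attach to a closing brace) are assumptions for illustration rather than quotes from the project’s documentation. A Bython-style file might look like this, with braces marking blocks and indentation being purely cosmetic:

    def greet(names) {
        for name in names {
            if name {
                print("Hello, " + name)
            } else {
                print("Hello, stranger")
            }
        }
    }

After preprocessing, the regular Python interpreter would receive the ordinary indentation-based version:

    def greet(names):
        for name in names:
            if name:
                print("Hello, " + name)
            else:
                print("Hello, stranger")

    greet(["Ada", "", "Guido"])

Because the output is plain Python, anything importable from the Python ecosystem keeps working unchanged.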

Javascript for Python?

Python often gets flak for significant whitespace: accidentally mix tabs and spaces and you get an error that can be hard to track down. A Reddit user mentioned that he loves brackets because code does not break when copied around. “I spend more time tabbing than I do putting two brackets and auto formatting,” he added. 

The key thing to note here is that Bython uses Python for interpretation, meaning existing Python modules like NumPy and Matplotlib still work seamlessly.

Jagrit Gumber, a frontend developer on X, while praising Bython, mentioned that it is really worth it if you come from a C, C++, C# or Java background. 

Some developers prefer using braces in code because it allows them to freely cut, paste, and rearrange code sections, relying on automatic formatting tools to correctly align and indent the code afterwards, regardless of how it was initially typed.

It is also related to the muscle memory of programmers coming from other programming languages. When you spend enough time with a language, you develop a habit of using curly braces, and most programming languages do use them. A developer on Hacker News mentioned that he had developed the muscle memory of using braces via Ctrl + M and could parse code much faster when it was both indented and properly bracketed.

Bython also allows developers to write code that is both easy to read and write, like Python, while also being able to leverage the efficiency and speed of C. “Bython provides easy ways to call C functions and leverage C libraries, more control over hardware, and performance optimisations,” said Saikat Sinha Ray while explaining how Bython can be used to call C functions. 

Some users loved Bython so much that they wanted all its features to be integrated into Python itself. A user on Hacker News said, “This shouldn’t be “Bython”, it should be Python 4.0,” suggesting that the next iteration of Python should have Bython baked into it. 

Will Bython Replace Python?

While Bython solves some key issues that bother many users, it cannot replace Python by any means. Bython is more of a passion project from Mathias Lohne, and it is not meant to be merged into the Python project, as not everyone finds whitespace an issue. 

There are users who are happy with Python and have no issues with its syntax. There are even users who hate the idea of braces and prefer Python’s syntax as it is. So think of Bython as an optional extension of Python, one you can reach for if braces and whitespace are a sticking point.

The post Meet Bython, the Python With Braces appeared first on AIM.

]]>
India’s AI Startup Boom: Govt Eyes Equity Stakes and GPU Support https://analyticsindiamag.com/ai-origins-evolution/indias-ai-startup-boom-govt-eyes-equity-stakes-and-gpu-support/ Fri, 30 Aug 2024 07:30:00 +0000 https://analyticsindiamag.com/?p=10134126

Indian startups need support in terms of computing resources more than in financing.

The post India’s AI Startup Boom: Govt Eyes Equity Stakes and GPU Support appeared first on AIM.

]]>

What does it take to qualify as an AI startup? At what stage do they need financial support? And lastly, what should the ideal mode of financing be? These critical topics came up for discussion when government officials recently met with key industry figures.

The focus of the meeting was AI startups in the context of the IndiaAI Mission. 

Notable attendees of the meeting included Google, NVIDIA, and Microsoft, and representatives from AI startups such as Jivi and DronaMaps were also present, the Hindustan Times said.

It’s encouraging to see the government recognise the rapid growth of AI startups across India and acknowledge the significant role they could play in driving the country’s economy in the coming years.

On a recent podcast, Rahul Agarwalla, managing partner at SenseAI Ventures, said he witnessed about 500 new AI startups emerge in the past six months, which is a massive number.

Based on the rate at which AI startups are popping up in the country, Agarwalla believes India could soon have 100 AI unicorns. While it remains to be seen when that happens, the budding AI ecosystem in India will indeed need support from the government beyond regulatory favours.

What Qualifies as an AI Startup?

A key topic discussed at the meeting was the criteria to define an AI startup. 

Attendees highlighted to the government that simply having ‘AI’ in their name does not automatically make a startup an AI-focused company.

In response, stakeholders proposed a rating system, which builds credibility among startups and would in turn make them eligible for government funding. Not everyone will make the cut though. 

Unfortunately, in the startup world, a majority of them do not live long enough to see the light at the end of the tunnel. 

Stakeholders recommend that rather than spreading a small amount of funding thin across many startups, the government should focus on identifying those with significant potential and provide them with targeted financial support.

Earlier, the government had allocated INR 10,372 crore as part of the India AI Mission – a part of which will be used to fund startups.

Should the Government Play the VC?

According to a Tracxn report, Indian AI startups raised $8.2 million in the April-June quarter, while their US counterparts raised $27 billion during the same period.

While not many Indian startups are building LLMs, which cost billions of dollars, the funding for AI startups in India still remains relatively low.

The government, under the IndiaAI Mission, is weighing options to fund AI startups and deciding how best to do so. One bold proposal on the table was taking equity stakes in these emerging companies.

The government had previously suggested taking the equity route as part of the second phase of the design-linked incentive (DLI) scheme for semiconductor companies. However, the idea was not well received by many in the industry. 

“[I] don’t understand the logic of the government trying to become a venture capital firm for chip design companies. This move is likely to be ineffective and inefficient,” Pranay Kotasthane, a public policy researcher, said back then.

They fear that the government taking equity could lead to government influence over company operations, and historically, public sector companies in India have often underperformed. Moreover, it could push other venture capitalists away.

Access to Datasets and Compute 

Stakeholders were also quick to point out that more than financing, what the startups need is help in terms of compute. 

According to Abhishek Singh, additional secretary, ministry of electronics and information technology (MeitY), the government plans to disburse INR 5,000 crore of the allocated INR 10,372 crore to procure GPUs.

The government was quick to identify the need for compute, especially for Indian startups, researchers, and other institutions. In fact, last year, the government revealed its intention to build a 25,000 GPU cluster for Indian startups. 

Interestingly, PM Narendra Modi also met Jensen Huang, the CEO of NVIDIA, the company producing the most sought-after GPUs in the market, during his visit to India in 2023.

The Indian Express reported earlier this month that the government had finalised a tender to acquire 1,000 GPUs as part of the IndiaAI Mission. These GPUs will provide computing capacity to Indian startups, researchers, public sector agencies, and other government-approved entities.

Besides access to compute, the stakeholders also urged the government to make the datasets under the IndiaAI Mission available as soon as possible. The datasets will grant startups access to non-personal domain-specific data from government ministries to train models.

Notably, the Bhashini initiative is playing a crucial role in democratising access to Indic language datasets and tools for the Indian ecosystem.

India Startup 2.0 

While the government’s recognition of the funding gap in AI startups and its willingness to provide financial support is encouraging, it is equally important that the government creates a favourable environment for these businesses to thrive.

In line with this, the government launched the Startup India programme in 2016 to foster a robust ecosystem for innovation and entrepreneurship in the country. 

This initiative was designed to drive economic growth and create large-scale employment opportunities by supporting startups through various means. Perhaps, the need of the hour is a similar programme designed specifically for AI startups.

As part of the startup programme, the government identified 92,000 startups and, in addition to funding, provided support such as income tax exemption for three years, credit guarantee schemes, ease of procurement, support for intellectual property protection, and international market access.

Moreover, over 50 regulatory reforms have been undertaken by the government since 2016 to enhance the ease of doing business, ease of raising capital, and reduce the compliance burden for the startup ecosystem.

Now, a similar ecosystem needs to emerge for AI startups as well, which fosters innovation, provides essential resources, and facilitates collaboration among researchers, developers, and investors to drive growth and success in the field.

The post India’s AI Startup Boom: Govt Eyes Equity Stakes and GPU Support appeared first on AIM.

]]>
Reliance Jio Announces Free Cloud Storage with Jio AI-Cloud https://analyticsindiamag.com/ai-news-updates/reliance-jio-announces-free-cloud-storage-with-jio-ai-cloud/ Fri, 30 Aug 2024 06:34:47 +0000 https://analyticsindiamag.com/?p=10134135

Jio also marks its entry into the AI based telecommunication space with Jio Phone call AI

The post Reliance Jio Announces Free Cloud Storage with Jio AI-Cloud appeared first on AIM.

]]>

Reliance Industries Limited (RIL) chairman and managing director (CMD) Mukesh Ambani announced the launch of the Jio AI-Cloud Welcome Offer, and Jio Phone Call AI at the company’s 47th annual general meeting. 

Jio Phone Call AI is an app-free feature that lets users record calls on Jio’s servers and have them transcribed; it can also translate the AI-generated transcriptions, all by dialling a dedicated AI number.

“Today, to support our AI Everywhere For Everyone vision using connected intelligence, I am thrilled to announce the Jio AI-Cloud Welcome Offer,” Ambani said. With user data safely stored on the cloud, AI will be able to deliver intelligent, personalised services over the network, he added.

Set to go live this Diwali, the offer will give Jio users up to 100 GB of free cloud storage to securely store and access photos, videos, documents and other digital content and data. 

Earlier this year, Jio launched ‘Jio Brain’, positioned as the industry’s first 5G-integrated ML platform, aimed at empowering telecom networks, enterprise networks and industry-specific IT environments to seamlessly incorporate ML tools into their day-to-day operations.

Akash Ambani, chairman of Reliance Jio Infocomm Limited (RJIL), also announced an upgrade to HelloJio, the voice assistant on the Jio set-top box, using generative AI to improve its natural language processing, make it feel more human-like and better understand Indian dialects.

Jio Goes Big on AI

Jio is making significant strides in the field of AI. Looking at the broader Indian AI landscape, the JioGenNext cohort highlights the potential of AI in sectors like healthcare, banking, and agriculture, with startups like Medhini-Arficus and Dista developing innovative solutions that address real-world problems. Additionally, the collaboration between NVIDIA, Tata Communications, and Jio Platforms will provide the necessary infrastructure for AI development in India.

Akash Ambani also announced that Jio is partnering with IIT Bombay on the BharatGPT initiative, which aims to address the lack of large language models (LLMs) for Indic languages. BharatGPT’s open-source approach fosters collaboration and knowledge sharing, which is essential for overcoming these hurdles and establishing India as a leader in Indic LLMs.

The post Reliance Jio Announces Free Cloud Storage with Jio AI-Cloud appeared first on AIM.

]]>