AIML – Global Tech News https://g-technews.com Thu, 13 Apr 2023 13:01:26 +0000

Francis the Great Calls for Ethical AI https://g-technews.com/2023/04/11/francis-the-great-calls-for-ethical-ai/ Tue, 11 Apr 2023 06:17:37 +0000
Although he acknowledged the advantages of AI, the Pope cautioned against its careless use.

In a speech at the annual "Minerva Dialogues", organised by the Vatican's Dicastery for Culture and Education, Pope Francis made a formal appeal for the moral and responsible use of technology, including artificial intelligence (AI). The Pope acknowledged the benefits of AI when it is applied for the greater good, but he also cautioned against immoral or reckless application.

This appeal for the ethical and prudent use of AI comes a week after AI-generated pictures of the Pope in a stylish white puffer coat deceived many people. The images, produced with the AI application Midjourney, created controversy online, and some news outlets called them "one of the first instances of widespread artificial intelligence-related misinformation".

The assembly brings together experts from a wide range of professions, including scientists, engineers, business executives, lawyers, theologians, and ethicists, to explore the social and cultural effects of digital technologies, with an emphasis on AI. According to Pope Francis, the development of artificial intelligence and machine learning "has the potential to contribute in a positive way to the future of humanity."

The Pope stressed how important it is for those working on these technologies to make a "constant and consistent commitment" to acting ethically and responsibly. He applauded the general agreement that "development processes" must uphold principles such as inclusivity, transparency, security, equity, privacy, and dependability, and welcomed the efforts of international bodies to regulate new technologies so that they foster "genuine progress" and "an inherently higher quality of life" overall.

ChatGPT Has Your Data in Its Crosshairs https://g-technews.com/2023/04/11/chatgpt-has-your-data-in-its-crosshairs/ Tue, 11 Apr 2023 05:57:33 +0000
Whatever you type into ChatGPT is saved on servers run by OpenAI, where it may be used to the company's advantage.

While using ChatGPT to quickly fix bugs in source code, engineers from Samsung's semiconductor group recently and unintentionally exposed confidential information. In less than a month, three instances of employees disclosing sensitive information through the tool were documented.

In one incident, a staff member asked ChatGPT to optimise test sequences for locating chip defects. In another, a worker used the tool to turn meeting notes into a presentation.

Notably, the leaks were discovered only three weeks after Samsung lifted a previous ban on staff use of ChatGPT, a ban imposed over precisely this concern. Samsung has now advised its staff not to use the chatbot, since recovering data that has already been collected would clearly be difficult.

A recent bug also exposed information about ChatGPT subscribers, including their personal and billing details as well as their chat histories. OpenAI informed 1.2% of its ChatGPT Plus subscribers that their billing information, including first and last name, billing address, credit card type, credit card expiry date, and the last four digits of their credit card, may have been visible to another user during a nine-hour window on March 20.

According to an internal investigation by OpenAI, the breach was caused by a bug in redis-py, the open-source Redis client library.

“We had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released,” said Sam Altman, CEO and co-founder of OpenAI, in a tweet.

He added that a small number of users were able to see the titles of other users' past conversations.

OpenAI has stated that user data, even from premium ChatGPT Plus subscribers, is retained for training the model unless users opt out. Even for those who do, the data is not destroyed until a month has passed.

The information you enter into ChatGPT is therefore stored on OpenAI's servers, where it may be used, in the company's words, "to develop new programmes and services", or shared with Microsoft.

This steady stream of incidents highlights the hazards that accompany the efficiency these tools deliver, and the obvious question is how to reduce the risks of using them in environments where sensitive data is routinely handled.

Is a ban the answer?

ChatGPT was temporarily banned in Italy last month because the chatbot does not comply with the EU's General Data Protection Regulation, which guarantees a "right to be forgotten". There is currently no mechanism that lets users request the removal of their data from a machine learning system once it has been used to train the model.

The Indian government also stated last week that it has assessed the ethical issues around AI, such as bias and privacy, and is taking steps to create a strong regulatory framework for the AI industry. However, it has not yet announced any plans to enact legislation.

OpenAI has, in turn, placed the responsibility for mitigating these issues with enterprises. Samsung, for instance, has decided to build its own internal AI for staff use, while limiting the length of staff ChatGPT prompts to a kilobyte, or 1,024 characters of text.
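A control like Samsung's reported one can be approximated with a simple pre-submission check. The 1,024-character limit comes from the article; the function name and the byte-based measurement are illustrative assumptions, not Samsung's actual implementation:

```python
MAX_PROMPT_BYTES = 1024  # reported per-prompt limit (~1 KB); exact policy unknown

def check_prompt(prompt: str) -> str:
    """Reject a prompt that exceeds the corporate size limit before it is
    ever sent to an external chatbot (hypothetical gatekeeper)."""
    size = len(prompt.encode("utf-8"))  # assumption: limit measured in bytes
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"Prompt is {size} bytes; limit is {MAX_PROMPT_BYTES}. "
            "Trim the input and remove any sensitive material."
        )
    return prompt
```

A length cap does not prevent leaks of short secrets, but it does block pasting whole source files or meeting transcripts, the failure mode described above.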

Another way for businesses to work around the problem is to use the ChatGPT API instead of the consumer tool: unlike the free tool, data submitted through the paid API is not used by OpenAI to train its models by default. OpenAI has also provided a form, referenced in its terms of service, that lets users opt out of having their data used for training.

India Retreats on AI Regulation. But Why? https://g-technews.com/2023/04/10/on-ai-regulation-india-retreats-then-why/ Mon, 10 Apr 2023 13:24:03 +0000
The Indian government views AI as a "kinetic enabler", and it wants to use it to improve governance.

The intense debate over the urgent need for AI regulation has produced nothing short of a civil war among researchers. In such a sensitive environment, the Indian government has sparked controversy by going against the grain. In a written response to a question in the Lok Sabha, the Ministry of Electronics and IT (MeitY) stated that "the government is not considering bringing a law or regulating the growth of artificial intelligence in the country."

Italy was the first Western nation to ban ChatGPT over privacy concerns. Meanwhile, the European Union's eagerly awaited AI Act is due to be introduced this year, and the US government has published a draft AI Bill of Rights.

Why won’t India regulate artificial intelligence?

The Indian government has adopted a proactive approach to technology, especially AI, with the goal of establishing India as a world leader in the field. It sees artificial intelligence as a "kinetic enabler" and wants to use the technology to improve governance.

"The government is harnessing the potential of AI to provide personalised and interactive citizen-centric services through Digital Public Platforms," MeitY wrote in the written answer. In the administration's view, enforcing strict laws could impede innovation. Many, for instance, consider the EU's draft AI Act overly onerous: according to Robin Röhm, founder of Genie AI, it would "put a lot of unnecessary bureaucracy over companies that are innovating quickly."

Since no AI-specific regulations currently exist anywhere in the world, Europe will be the first region to design legislation specifically for AI. Rather than rushing to control AI, India has chosen to wait and monitor the situation.

No regulation does not mean no oversight

While the Indian government has for now declined to regulate AI, this does not mean there are no checks and balances in place. MeitY noted that a number of central and state departments and agencies have begun working to standardise ethical AI development.

The National Strategy for AI (NSAI), unveiled in June 2018, shows that the government has also acknowledged the ethical concerns around AI. In addition, the technology and its developers will be governed by current and prospective legislation.


LLMs' Environmental Effects https://g-technews.com/2023/04/10/llms-environmental-effects/ Mon, 10 Apr 2023 13:17:12 +0000
GPT-3's carbon emissions were 500 times those of a flight from New York to San Francisco.

The rise of LLMs has given rise to a further concern: the environmental effect of training these models. Training huge models releases hundreds of tonnes of carbon dioxide into the atmosphere. According to the sixth edition of Stanford University's AI Index Report 2023, GPT-3's carbon dioxide-equivalent emissions reached 502 tonnes in 2022, the highest among trained models with comparable parameter counts.

The analysis does not cover the more recent GPT-4 model, which could make matters worse; notably, OpenAI has not disclosed its parameter count. Researchers use several metrics to estimate the carbon emissions of AI systems, including the number of parameters required to train the model, the power usage effectiveness (PUE) of the data centre, and the carbon intensity of the electrical grid. OpenAI's most recent technical report mentioned neither environmental impact, carbon emissions, nor parameter size.
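These metrics combine into the standard operational-carbon estimate used by machine-learning emissions calculators: the energy drawn by the accelerators, multiplied by the data-centre overhead (PUE) and the grid's carbon intensity. A minimal sketch, with purely illustrative numbers that are not the report's actual inputs:

```python
def training_emissions_tonnes(gpu_hours: float, gpu_power_kw: float,
                              pue: float, grid_kgco2_per_kwh: float) -> float:
    """Operational CO2-equivalent of a training run, in tonnes.

    accelerator energy * data-centre overhead (PUE) * grid carbon intensity
    """
    energy_kwh = gpu_hours * gpu_power_kw          # energy drawn by the GPUs
    kg = energy_kwh * pue * grid_kgco2_per_kwh     # total emitted CO2e in kg
    return kg / 1000.0

# Hypothetical run: 1,000,000 GPU-hours at 0.3 kW per GPU, PUE 1.1,
# on a grid emitting 0.4 kgCO2e/kWh -> roughly 132 tonnes of CO2e.
example = training_emissions_tonnes(1_000_000, 0.3, 1.1, 0.4)
```

The same run moved to a low-carbon grid (say 0.05 kgCO2e/kWh) would emit roughly an eighth as much, which is why grid intensity is one of the report's key variables.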

Of the four LLMs examined in the AI Index Report, GPT-3 had the highest emissions. It exceeded even Gopher, an open-source model trained with a substantial 280 billion parameters. BLOOM, a multilingual open model with the same parameter count as GPT-3, produced 25 tonnes of carbon in 2022, roughly 20 times less than GPT-3. Meta's Open Pre-trained Transformer (OPT) used the least energy, generating only one-seventh as much carbon dioxide as GPT-3.

AI to Cut Down on Energy?

AI itself is now being tested as a way to tackle the high energy consumption of AI systems. Training LLMs will always require energy, but experiments have applied reinforcement learning to the control of commercial cooling systems. New reinforcement learning models such as DeepMind's BCOOLER (BVE-based Constrained Optimisation Learner with Ensemble Regularisation) target data centre energy efficiency.

DeepMind and Google ran live trials at two real facilities, where the experiments delivered energy reductions of 9% and 13% respectively.

Train with a weaker GPU

Initiatives are also underway to shrink the large carbon footprints of LLMs by lowering the computation required to run them. AI researchers recently released FlexGen, a high-throughput generation engine for running large language models on constrained resources, such as a single commodity GPU. FlexGen uses a linear programming optimiser to search for the most efficient way to store and retrieve tensors, and by compressing weights it can fit bigger batch sizes and boost throughput. FlexGen was able to run OPT-175B on a single 16GB GPU at high throughput.
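The gain from compression can be illustrated with a back-of-the-envelope memory budget: shrinking the weights frees GPU memory for activations, which is what allows a larger batch. This toy function is not FlexGen's actual optimiser (which solves a linear program over GPU, CPU, and disk placement), and every size below is hypothetical:

```python
def max_batch_size(gpu_mem_gb: float, weights_gb: float,
                   act_gb_per_seq: float, compress_ratio: float = 0.25) -> int:
    """Largest batch that fits once weights are compressed.

    compress_ratio=0.25 sketches 4-bit quantisation of 16-bit weights.
    Returns 0 if the compressed weights alone exceed GPU memory.
    """
    free = gpu_mem_gb - weights_gb * compress_ratio  # memory left for activations
    if free <= 0:
        return 0
    return int(free // act_gb_per_seq)               # whole sequences that fit

# Hypothetical: a 16 GB GPU and a model with 40 GB of uncompressed weights.
# Compressed to ~10 GB, 6 GB remain; at 0.5 GB of activations per sequence
# that allows a batch of 12, whereas uncompressed weights don't fit at all.
```

The real system also overlaps I/O with compute and spills tensors to CPU RAM and disk, so this sketch only captures the compression-to-batch-size part of the story.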

DistilBERT, a "distilled version" of BERT for NLP pre-training, allows question-answering systems and other models to be trained on a single GPU. It is a lighter, faster, and cheaper variant of BERT: it has 40% fewer parameters and runs 60% faster while retaining over 95% of BERT's performance.
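DistilBERT gets its size reduction through knowledge distillation, in which the small student model is trained to match the large teacher's temperature-softened output distribution. A simplified NumPy sketch of that soft-target loss (the temperature value and function names here are illustrative, not the paper's exact training recipe):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's: the soft-target term of a distillation objective."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

When the student's logits match the teacher's exactly, the loss is zero; the further the student's distribution drifts, the larger the penalty, which is how the compact model inherits the larger one's behaviour.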

Because smaller models have fewer parameters to train, these breakthroughs may also mean fewer emissions. Meta AI released LLaMA, a family of foundation models with parameters ranging from 7B to 65B; despite being ten times smaller than GPT-3, LLaMA-13B is said to outperform it.

Exotel Launches GPT-4-Powered Chatbot Builder https://g-technews.com/2023/04/10/launch-of-the-gpt-4-powered-chatbot-builder-by-exotel/ Mon, 10 Apr 2023 13:12:54 +0000
The company claims that numerous large firms have already deployed the bot.

Customer conversation platform Exotel recently introduced ExoMind, a user-friendly, no-code tool that lets businesses build their own sophisticated chatbots in a matter of minutes.

ExoMind is built on GPT-4's large language model (LLM), and numerous large corporations have already deployed the bot, according to the company.

Once customised, the bots can be used for marketing, sales, customer service, and troubleshooting, across a variety of communication channels such as WhatsApp and the web.

According to Shivakumar Ganesan, co-founder and CEO of Exotel, “personalized conversational commerce is the future of customer engagement, so we are thrilled to be launching ExoMind to help businesses grow and keep their customer base.”

Ganesan added, "We plan on continuously using the capabilities of future language-learning and even image processing models in our product line, to stay at the forefront of the customer engagement field."

Since the release of ChatGPT, the well-known chatbot from OpenAI, businesses from a variety of industries have been looking for ways to use LLMs.

In January, AI platform Gupshup announced the release of "Auto Bot Builder", a powerful tool that uses GPT-3 to automatically build sophisticated chatbots tailored to enterprise needs.

ChatGPT Puts OpenAI in International Legal Trouble https://g-technews.com/2023/04/10/chatgpt-puts-openai-in-international-legal-trouble/ Mon, 10 Apr 2023 13:07:25 +0000
The Office of the Privacy Commissioner of Canada (OPC) is investigating OpenAI in response to a complaint alleging the unauthorised collection, use, and disclosure of personal data.

OpenAI, maker of the well-known chatbot ChatGPT, is currently facing legal problems in several jurisdictions. Brian Hood, the mayor of Hepburn Shire in Australia, may sue OpenAI if ChatGPT's false claims that he served a prison sentence for bribery are not corrected.

Hood was shocked to learn that ChatGPT had falsely linked him to a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.

According to his legal representatives, Hood did in fact work for the subsidiary, but he was never charged with any crime: he was the whistleblower who informed the authorities about bribes offered to foreign officials to win currency-printing contracts.

In a statement to the media, Canadian privacy commissioner Philippe Dufresne stated that “we need to keep up with—and stay ahead of—fast-moving technological advancements.”

Italy recently became the first country in Europe to ban ChatGPT, with the nation's data protection regulator instructing OpenAI to temporarily stop processing Italian users' data.

The regulator claimed that “there appears to be no legal basis” for the extensive gathering and use of personal information to “train” the platform’s algorithms.

If the San Francisco-based company cannot provide a satisfactory explanation, it may face a fine of almost USD 21.8 million.

Beyond Italy, ChatGPT is also unavailable in China, Russia, and North Korea.

A ChatGPT Clone for Only $300 https://g-technews.com/2023/04/10/clone-of-chatgpt-for-only-300/ Mon, 10 Apr 2023 12:48:37 +0000
The model weights have been made available along with the code.

After Stanford University released Alpaca, a ChatGPT-style clone trained for $600, a team from UC Berkeley, CMU, Stanford, and UC San Diego developed Vicuna-13B, an open-source alternative. Vicuna-13B reportedly achieves 90% of ChatGPT's quality at a training cost of around $300. The model was built by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

In early evaluations using GPT-4 as a judge, Vicuna-13B surpassed models such as LLaMA and Stanford Alpaca in more than 90%* of cases, and achieved over 90%* of the quality of both OpenAI's ChatGPT and Google's Bard. The findings have sparked great interest in natural language processing, especially among companies looking to exploit the latest developments in AI, though the researchers note in a blog post that building an evaluation system for chatbots remains an open problem needing further investigation.

Given the researchers' bold claims about its natural language processing abilities, it will be interesting to see how it compares with models like ChatGPT over time.

Vicuna-13B's developers claim it processes natural language better than comparable models such as ChatGPT. While the two systems share similarities, the model stands out for its efficiency and customisation options. Industry experts are watching its performance closely, expecting it to set new standards for AI-powered language processing.

Despite its impressive capabilities, the model has limitations: it struggles with reasoning and mathematical calculations, for instance, and its outputs are not always factually accurate.

The model has also not been fully tuned for safety or to reduce potential toxicity and bias. To address these concerns, the developers have integrated OpenAI's moderation API into the online demo to filter out unsuitable user inputs.

OpenAI Defends AI Safety in New Statement https://g-technews.com/2023/04/10/openai-defends-ai-safety-in-new-statement/ Mon, 10 Apr 2023 12:39:00 +0000
To better understand the capabilities, advantages, and hazards of GPT-4, OpenAI claimed to have delayed its deployment by more than six months.

Amid growing concerns around the use of GPT-4, OpenAI has officially laid out its approach to AI safety and its efforts to design and deploy safe AI systems.

In a blog post, the company wrote that investing more time and resources in researching effective mitigations and alignment techniques, and testing them against real-world abuse, is a viable way to address AI safety concerns.

It also argued that improving AI capabilities and safety should go hand in hand: OpenAI says its best safety work to date has come from working with its most capable models, which are better at following human instructions and easier to steer or "guide".

The statement follows an open letter, signed by more than 11,000 people, urging a six-month pause on large-scale AI projects, particularly the training of models more powerful than GPT-4. ChatGPT is also forbidden in several nations: Italy recently banned it over privacy concerns, and regulators in other countries, including Spain, have since turned their attention to it.

OpenAI claimed to have delayed GPT-4's deployment by more than six months to better understand its capabilities, advantages, and hazards, arguing that moving more slowly is sometimes vital to making AI systems safer. It added that policymakers and AI providers will need to ensure AI development and deployment are governed effectively at a global scale, so that no one cuts corners in order to get ahead.

To build a safe AI ecosystem, OpenAI aims to take a collaborative approach and foster open communication among stakeholders. It believes the issue requires in-depth discussion, experimentation, and engagement, including on the limits of AI system behaviour.

OpenAI's Challenges

OpenAI acknowledged that there are limits to what can be learned in a lab, and said it works to mitigate foreseeable risks before release. Even so, the company said it cannot predict all of the beneficial, or malicious, uses people will find for its technology.

Because of this, OpenAI said, "we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time." The company also said it closely monitors API partners while cautiously and gradually releasing new AI systems to an ever-widening audience.

OpenAI added that its technology is not allowed to produce hateful, violent, harassing, or adult-themed content, and that GPT-4 is 82% less likely than GPT-3.5 to respond to requests for disallowed content.

It said a strong monitoring mechanism is in place to watch for abuse. The company claimed that when users attempt to upload child sexual abuse material to its image tools (DALL·E 2), it blocks the attempt and reports it to the National Center for Missing and Exploited Children.

The Work Models Tech Companies are Adopting https://g-technews.com/2023/04/10/the-work-models-tech-companies-are-adopting/ Mon, 10 Apr 2023 12:30:35 +0000
Amid the continuing debate over work-life balance and productivity, a hybrid work model that supports and enables employees to be productive anywhere is becoming increasingly popular among digital organisations across many regions.

As the pandemic has subsided, companies all over the world are changing their work practices to meet both employee demands and business needs.


The report also examines the work models that tech-sector businesses are choosing across company ages, sizes, and geographies.

AI's Impact on Jobs Uncovered. Do You Need to Worry? https://g-technews.com/2023/04/10/jobs-impact-of-ai-is-uncovered-do-you-need-to-worry/ Mon, 10 Apr 2023 12:25:26 +0000
As with every previous technological advance, we are witnessing the beginning of something that will both replace labour and offer new employment prospects.

The rapid advancement of AI over the past several months has stunned people all over the world. While some are delighted by the potential of this cutting-edge technology, others are raising the alarm and urging greater caution and restraint.

In the middle of this, a report forecasts that while the technology could boost global GDP by 7%, it will affect around 300 million jobs across major economies. The report's predictions are based on an analysis of US and European data, but when it comes to India it tells only half the story.

What does the report say?

The report's main objective is to estimate the share of work, across all industries and occupations, that is susceptible to automation by AI. It measures this using the O*NET database, which provides detailed information on the task content of more than 900 occupations in the United States, and the European ESCO database, which covers more than 2,000 occupations.

The findings indicate that almost two-thirds of existing occupations could be partially automated by AI, with administrative (46%) and legal (44%) professions the most exposed, and physically demanding professions, such as construction (6%) and maintenance (4%), the least. This is because the report assumes that automating physical labour would require considerable advances in the integration of AI and robotics, which will not happen anytime soon.

Extending the US and European estimates globally, the report makes an intriguing observation: "our estimates intuitively suggest that fewer jobs in emerging markets (EMs) are exposed to automation than in developed markets, but that 18% of work globally could be automated by AI on an employment-weighted basis."

Has India been least affected by AI?

Interestingly, India apparently ranks last among all the countries considered, with a little over 10% of full-time-equivalent employment exposed to automation by AI. Given how seriously Indian businesses are treating AI development, this is startling, and the paper does not reveal the assumptions underlying the conclusion.

One way to understand these figures is to consider India's workforce demographics. The paper assumes that AI won't affect the agriculture sector, which in developing nations like India employs over 45.6% of the labour force.

However, looking only at the trend in workforce distribution, Statista's data shows that participation in the manufacturing and service sectors is rising year over year, and in those industries the situation is not as dire as the Goldman Sachs data suggests. In fact, according to a recent Stanford AI analysis, India has the greatest relative AI skill penetration rate of all the countries studied.
