SecurityWorldMarket

10/11/2023

Security expert warns against AI dependence

Vadodara, India

AI is dominating the headlines lately, and with some reports naming artificial intelligence as the biggest spend for nearly half of top tech businesses across the economy; a signal of just how much the AI boom is spreading.  According to a report published by the International Trade Administration, at the US Department of Commerce, the UK’s AI market is currently valued at over $21 billion, and it is estimated to grow significantly during the next few years and to add $1 trillion to the UK economy by 2035.

To shed more light on the extent and scope of this spread, Web Security company, Indusface, has gathered some insights into specifically UK businesses to determine the sectors most at risk of AI dependence, and now the company has shared its findings.

Legal, IT & telecoms, and financial sectors lead

The legal sector is one of the most prominent industries using AI, with the sector leveraging artificial intelligence for legal research, analysis, due diligence, efficiency and cost reduction. According to the survey, almost a third (30%) of law businesses across the UK state they have already adopted AI, enhancing the quality of their legal services.

IT and telecoms come in second with 29.5% of UK businesses in this sector having adopted the use of AI. The benefits for this sector include network optimisation, predictive maintenance, customer service automation, security, and data analytics, among other applications.

AI also plays a pivotal role in the finance and accounting sector, with more than a quarter (26%) of UK businesses in this field already utilising AI for tasks in the workplace. The finance sector uses artificial intelligence for fraud detection, credit risk assessment, financial reporting, tax preparation and customer service.

It’s clear to see that using AI in business offers numerous benefits, however, according to Indusface these sectors could be at risk of AI dependence, with almost six in ten (59%) Brits admitting they are concerned about the use of artificial intelligence, according to Forbes Advisor research.

Sectors powered by tech will see faster adoption of AI

Venky Sundar, Founder and President of Indusface shares his insight on the risks of using AI in business. “Any sectors powered by tech as a key element of their business, from farming to finance are at risk of relying too much on AI. Anything data heavy, requiring analytics to stay relevant (consumer engagement and insights in ecomm and retail, credit and risk rating, and consumer understanding will see faster adoption of AI than other sectors).

The risk though is that POC (proof of concept) should just be used for that purpose. If you go to market with the POC, there could be serious consequences around application security and data privacy. AI could give you code that is vulnerable to attacks, as secure coding practices are rarely followed, and you can bet that most code that AI has been trained on would be vulnerable to attacks."

Using AI to prompt could lead to serious consequences

“The other risk is with just using LLMs (large language models) as an input interface for the products. So far, the inputs that you could provide the software with were very limited, as form fields such as text boxes or drop down lists controlled most inputs into the software. But using AI to read and interpret a prompt could actually remove this predictability. This can lead to serious consequences and early risks of these “prompt injections” are being captured by OWASP (the Open Worldwide Application Security Project)."

Venky Sundar also suggests how businesses might overcome this type of risk.

“Whilst AI can help businesses conceptualise and build code, AI should still be used as a sidekick, not a dependent, especially for software development. Before companies go to market, we would still recommend following the same best practices of 1) vulnerability scanning 2) penetration testing 3) patching and 4) WAAP implementations to thwart attacks.”

Humans need to oversee to ensure diversity

“Since LLM injections are a threat, the knowledge base used to build productivity use cases, and the knowledge base used to build defence use cases on what’s not acceptable have to be separate sources that need to be trained and updated continuously. There needs to be oversight from human beings, and the people maintaining these datasets have to be diverse so that the data sets are diverse enough.”


Tags


Product Suppliers
Back to top