Artificial Intelligence: Best Practices for Entrepreneurs and Business Owners


It only took two months for OpenAI’s ChatGPT to become the fastest-growing consumer application in history.[1] This “technological revolution” made headlines worldwide, sparking much debate, controversy, and confusion around artificial intelligence (“AI”). Put simply, AI can “think” and “act” in ways that previously only humans could, meaning the technology can interpret, learn, and make decisions with little to no human involvement.[2] While this technology is poised to bring about transformational change to the way in which we live and work, it is important to keep the Greek myth of Icarus in mind when using AI: overconfidence in the technology can blind us to its potential hazards.

As an aside, in this blog, “system”, “chatbot”, “tool”, “programme”, “technology”, “GenAI”, “application”, “platform”, and “model” will be used interchangeably to refer to AI.

“Algorithms” vs “Artificial Intelligence”

It is important to draw a distinction between algorithms and artificial intelligence. An algorithm is a sequence of instructions or rules performed to solve a defined problem. AI, as a term, is used much more expansively: it refers to a group of algorithms that can modify themselves and create new algorithms in response to the data they process.[3] In actuality, we encounter algorithms everywhere, for example in search engines, the recommendations we get on social media, digital voice assistants, and online banking and fraud detection. As a matter of fact, in this day and age, we live in a symbiotic relationship with algorithms (often running in the background).
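
To make the distinction concrete, a classic algorithm is nothing more than a fixed sequence of rules. A minimal sketch in Python, using a fraud-detection-style rule whose name and threshold are invented purely for this illustration:

```python
def flag_suspicious(amount, average_amount):
    """A fixed sequence of instructions: flag a transaction as suspicious
    if it is more than three times the account's historical average.
    (The rule and the threshold are invented for this illustration.)"""
    return amount > 3 * average_amount

# The rule never changes on its own. Unlike an AI system, it cannot
# learn from new transactions or rewrite itself.
print(flag_suspicious(500, 100))  # well above average -> True
print(flag_suspicious(120, 100))  # close to average -> False
```

An AI system, by contrast, would adjust its own decision boundary as it observed more transactions, without a human rewriting the rule.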

Defining the Undefinable

There is no universal definition of AI.[4] The term was first coined by Professor John McCarthy in 1955, who would later define it as “the science and engineering of making intelligent machines.”[5] Keeping in mind that AI can be defined in many ways (and is always evolving), it can, at its simplest, be understood as an algorithm-based technology capable of carrying out functions and making decisions usually associated with human cognitive abilities.

Generative AI (“GenAI”)

Looking at GenAI (e.g., ChatGPT) more generally, we already see the potential it has to transform the workplace. Many believe that GenAI will lead to increased productivity, which will, in theory, free up more time for employees, allowing them to focus on higher-value and complex work.[6] There may also be a rise in business performance driven by increased innovation and output. For example, Microsoft has embedded OpenAI's GPT-4 technology into its products (e.g., Microsoft 365 Copilot can instantly summarise documents, generate emails, speed up Excel analysis, and its Business Chat feature can summarise chat conversations).[7]

What Entrepreneurs & Businesses Will Need to Consider:

GenAI tools use machine learning (a subset of AI) to create content and generate answers. Generally speaking, machine learning is composed of: (i) an algorithm or algorithms; (ii) training data; and (iii) a model.[8] The algorithm learns to identify patterns after being trained on a large set of examples (the training data). Once a machine-learning algorithm has been trained, the result is a machine-learning model. The model is what individual users interact with by prompting it in the form of text, images, video, designs or musical notes.[9] The model then generates content, which can be tailored with feedback about the style, tone and other elements that the user would like the chatbot to include.[10]
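
The three components above can be sketched with a deliberately tiny example. The “algorithm” below simply counts which completion most often follows a prompt word in the training data; the counts it accumulates are the “model” that users then query. Every name and data point here is invented for illustration, and a real large language model is vastly more sophisticated:

```python
from collections import defaultdict, Counter

# (i) The algorithm: a learning rule that counts, for each prompt word,
# which completion appears most often in the examples it is shown.
def train(training_data):
    counts = defaultdict(Counter)
    for prompt, completion in training_data:
        counts[prompt][completion] += 1
    return counts  # (iii) the trained model

def generate(model, prompt):
    # Users interact with the trained model by prompting it; it replies
    # with the completion it saw most often during training.
    return model[prompt].most_common(1)[0][0] if prompt in model else None

# (ii) The training data: a handful of (prompt, completion) examples.
examples = [("sky", "blue"), ("sky", "blue"), ("sky", "grey"), ("grass", "green")]

model = train(examples)
print(generate(model, "sky"))  # -> blue (the most frequent completion)
```

The point of the sketch is only the division of labour: the algorithm is written once, the training data shapes the model, and end users see only the model.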

As the market becomes increasingly saturated with AI platforms, it is important to understand how their use can impact entrepreneurs and businesses:

Accuracy

GenAI can draft and generate summaries of existing content. But given the variable quality of its output, and the absence of any guarantee of accuracy, human review is still necessary. While this may free up time in the first instance, a degree of human participation will still be required to proof the output. As stated in OpenAI's “Privacy Policy”: “services like ChatGPT generate responses by reading a user’s request and, in response, predicting the words most likely to appear next. In some cases, the words most likely to appear next may not be the most factually accurate. For this reason, you should not rely on the factual accuracy of output from our models.”[11] GenAI can nevertheless be used as a starting point; but because the same output is likely to be replicated in responses to other users, results will only be differentiated if human creativity is used to curate and fine-tune them. A simple illustration is given by OpenAI: “you may provide input to a model such as ‘What color is the sky?’ and receive output such as ‘The sky is blue.’ Other users may also ask similar questions and receive the same response.”[12] It could also be the case that the AI system is trained on data only up to a certain date (e.g., ChatGPT was, until only recently, trained on data up to September 2021).[13]
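
The “most likely next word” mechanism quoted above can be demonstrated with a contrived sketch. The toy corpus below is invented so that a factually wrong sentence outnumbers the right one, as can happen in scraped training data; a frequency-based prediction then confidently returns the wrong answer:

```python
from collections import Counter

# Invented toy corpus: the factually wrong sentence appears more often
# than the correct one.
corpus = (
    ["the capital of australia is sydney"] * 3
    + ["the capital of australia is canberra"]
)

# Count which word follows "is" anywhere in the corpus.
continuations = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        if prev == "is":
            continuations[nxt] += 1

# The most likely next word is simply the most frequent one,
# not the most accurate one.
print(continuations.most_common(1)[0][0])  # -> sydney (factually wrong)
```

This is why frequency in the training data, rather than truth, drives the output, and why human review of the result remains necessary.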

Intellectual Property

GenAI can render results similar to existing IP. This may make it difficult to determine whether the output violates the intellectual property rights of others, as certain GenAI models have been found to have been trained on IP-protected data.[14] Getty Images, for example, is claiming in proceedings that Stability AI used its copyrighted images as training data.[15] Similarly, IP ownership over AI-generated works could be contingent on the degree of human-user participation in generating the AI output.[16] In addition, the IP provisions in the T&Cs of the model will need to be reviewed.

Confidentiality

The data that a user inputs into a chatbot could be used to improve the model’s performance. It is therefore important to avoid entering sensitive or identifiable information, such as medical records or financial data, into the AI. While OpenAI’s “Privacy Policy” does say that they “may aggregate or de-identify [p]ersonal [i]nformation so that it may no longer be used to identify you and use such information to analyze the effectiveness of our [s]ervices, to improve and add features to our [s]ervices, to conduct research and for other similar purposes”, this should not be taken as absolute and is entirely at their discretion.[17] As such, caution is needed when processing sensitive or identifiable data (…or even your next big idea or development), as it may result in unintended exposure.

Veil of Obscurity

The behaviour of AI models is generally neither transparent nor easily understandable; such models are instead referred to as “black boxes”. In effect, AI developers place the model itself, or the training data, inside a black box, obscuring the data and protecting the intellectual property.[18] This often makes it difficult, if not impossible, to grasp how a conclusion was reached. This becomes of particular concern when AI is used in sensitive or critical environments (e.g., in HR, medical diagnostic systems, or financial decision-making).

Hallucinations

Much like humans, AI systems are prone to “hallucinations”. These occur when the AI perceives patterns or objects that are non-existent, resulting in nonsensical conclusions.[19] This includes, but is not limited to, the AI inventing facts by presenting sources, contexts or events that are untrue or self-contradictory. Only recently, for example, a New York attorney presented fabricated case citations in a legal brief based on answers generated by ChatGPT.[20] These hallucinations are often caused by limitations such as biases in, or low-quality, training data, deficient programming, or a lack of user-provided context.[21] And since many AI systems are black boxes, this only exacerbates the problem.

Bias

It is flawed to think that algorithms are objective. GenAI can be biased and opinionated, and bias can creep into algorithms in several ways. As discussed above, AI systems make decisions based on their training data, which can itself contain biases or inclinations. Those who design AIs may also incorporate filters aimed at avoiding answers to certain questions. Extensive research suggests that some chatbots exhibit evident left-leaning political biases.[22] This is concerning if one takes the generated material at face value.

These are only some of the ways in which using AI models can prove dangerous and difficult to navigate. It is often the case that we put our complete trust in technology without questioning or doubting it. GenAI is not, in fact, all that intelligent; rather, the confidence with which AIs output their answers often makes inaccuracies tough to recognise. Put another way, an AI is only as smart and reliable as its user.


The Regulatory Framework

In a policy paper presented to Parliament by the Secretary of State for Science, Innovation and Technology in March 2023, the UK Government tried to define, but in reality rather illustrated, AI by reference to two characteristics: its “adaptivity” and its “autonomy”.[23] As opposed to the EU’s forthcoming AI Act, which classifies AI according to the level of risk it could pose to the health and safety or fundamental rights of a person, the UK endeavours to “future-proof” the country’s approach with a pro-innovation strategy.[24] What this means is that the UK will not establish blanket rules, and will avoid legal definitions that could potentially stifle the growth of the AI industry.[25] Rather, the UK Government will rely on existing regulators, like the Financial Conduct Authority and the Competition and Markets Authority, to issue guidance to businesses within their purview. Existing regulators will need to be aware of the five cross-sectoral principles: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.[26] However, delegating such powers could cause inadvertent difficulties by making the regulatory landscape harder for businesses, especially growing ones, to navigate. Moreover, it is not uncommon, particularly for larger companies, to be present in multiple sectors of industry, and as such to be the receiver of AI services in one and the provider in another. Be that as it may, the regulator-led approach received support from the industry during the consultation on the AI Regulation Policy Paper.[27]

AI Usage Policy in your Business

As AI becomes increasingly pervasive and more integral to everyday business operations, it will be important for companies to establish clear internal policies for its use. Such internal governance measures should be in place to ensure effective oversight of AIs used in the workplace by employees and other members of staff, with clear lines of accountability. This will potentially minimise legal and regulatory risks by ensuring ethical and responsible use of data. However, internal usage policies are not one-size-fits-all, and adapting them to varying business and industry practices will be paramount.

If you would like to learn more about how you can protect your business with a policy, please get in touch with a member of our Corporate team. We can tailor a policy to your needs or your company’s specific circumstances.

Conclusion

AI systems continue to advance, gradually improving their ability to engage in human-like dialogue, with some providers (like OpenAI) promising that their chatbots will soon be able to have voice conversations with users.[28] This blog sought to provide a glimpse, by no means complete, of how to approach and familiarise yourself with AI technology. As companies continue to integrate and rely on AI in their workplaces and business strategies, risks and opportunities will inevitably arise, and due regard must be had to mitigating them from both an individual-user and a broader business-industry perspective.

Note: The UK will host the world’s first summit on AI safety in early November 2023, which may provide further insight as to how internationally coordinated action will be taken to set measures for furthering safety in global AI use.[29]


References

[1] K Hu, ‘ChatGPT sets record for fastest-growing user base - analyst note’ (Thomson Reuters, 2 February 2023) https://www.reuters.com/techno... accessed 25 October 2023.

[2] B Marr, ‘What Is The Artificial Intelligence Revolution And Why Does It Matter To Your Business?’ (Forbes, 10 August 2020) https://www.forbes.com/sites/b... accessed 23 October 2023.

[3] K Ismail, ‘AI vs. Algorithms: What's the Difference?’ (CMSWire, 26 October 2018) https://www.cmswire.com/inform... accessed 21 October 2023.

[4] ‘What's AI?’ (Council of Europe) https://www.coe.int/en/web/art... accessed 25 October 2023 and M O'Shaughnessy, ‘One of the Biggest Problems in Regulating AI Is Agreeing on a Definition’ (Carnegie Endowment for International Peace, 6 October 2022) https://carnegieendowment.org/... accessed 25 October 2023.

[5] John McCarthy, ‘What Is Artificial Intelligence?’ (2007) Stanford University http://jmc.stanford.edu/artifi... accessed 23 October 2023.

[6] B Marr, ‘Boost Your Productivity with Generative AI’ (Harvard Business Review, 27 June 2023) https://hbr.org/2023/06/boost-... accessed 21 October 2023.

[7] L Mearian, ‘Microsoft: 365 Copilot chatbot is the AI-based future of work’ (Computerworld, 16 March 2023) https://www.computerworld.com/... accessed 24 October 2023.

[8] S Bagchi, ‘Why We Need to See Inside AI’s Black Box’ (Scientific American, 26 May 2023) https://www.scientificamerican... accessed 23 October 2023.

[9] G Lawton, ‘What is generative AI? Everything you need to know’ (TechTarget, 2023) https://www.techtarget.com/sea... accessed 23 October 2023.

[10] Ibid.

[11] ‘Privacy policy’ (OpenAI, 23 June 2023) https://openai.com/policies/pr... accessed 23 October 2023.

[12] ‘Terms of use’ (OpenAI, 14 March 2023) https://openai.com/policies/te... accessed 23 October 2023.

[13] A Radford and Z Kleinman, ‘ChatGPT can now access up to date information’ (BBC, 27 September 2023) https://www.bbc.co.uk/news/tec... accessed 23 October 2023.

[14] G Appel, J Neelbauer, and D Schweidel, ‘Generative AI Has an Intellectual Property Problem’ (Harvard Business Review, 7 April 2023) https://hbr.org/2023/04/genera... accessed 23 October 2023.

[15] B Brittain, ‘Getty Images lawsuit says Stability AI misused photos to train AI’ (Reuters, 6 February 2023) https://www.reuters.com/legal/... accessed 25 October 2023.

[16] J Umeh, ‘AI versus IP: how could generative Artificial Intelligence impact intellectual ownership?’ (British Computer Society, 29 March 2023) https://www.bcs.org/articles-o... accessed 25 October 2023.

[17] ‘Privacy policy’ (OpenAI, 23 June 2023) https://openai.com/policies/pr... accessed 23 October 2023.

[18] S Bagchi, ‘Why We Need to See Inside AI’s Black Box’ (Scientific American, 26 May 2023) https://www.scientificamerican... accessed 23 October 2023.

[19] ‘What are AI hallucinations?’ (IBM) https://www.ibm.com/topics/ai-... accessed 23 October 2023.

[20] S Merken, ‘New York lawyers sanctioned for using fake ChatGPT cases in legal brief’ (Reuters, 26 June 2023) https://www.reuters.com/legal/... accessed 25 October 2023.

[21] E Glover, ‘What Is An AI Hallucination?’ (BuiltIn, 2 October 2023) https://builtin.com/artificial... accessed 23 October 2023.

[22] J Baum and J Villasenor, ‘The politics of AI: ChatGPT and political bias’ (Brookings, 8 May 2023) https://www.brookings.edu/arti... accessed 20 October 2023.

[23] Secretary of State for Science, Innovation and Technology, A pro-innovation approach to AI regulation (CP 815, 2023).

[24] ‘EU AI Act: first regulation on artificial intelligence’ (European Parliament, 14 June 2023) https://www.europarl.europa.eu... accessed 25 October 2023.

[25] Secretary of State for Science, Innovation and Technology, A pro-innovation approach to AI regulation (CP 815, 2023), paragraph 41.

[26] Secretary of State for Science, Innovation and Technology, A pro-innovation approach to AI regulation (CP 815, 2023), paragraph 48.

[27] Secretary of State for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI (CP 728, 2022) and Secretary of State for Science, Innovation and Technology, A pro-innovation approach to AI regulation (CP 815, 2023), paragraph 67.

[28] A Radford and Z Kleinman, ‘ChatGPT can now access up to date information’ (BBC, 27 September 2023) https://www.bbc.co.uk/news/tec... accessed 23 October 2023.

[29] Department for Science, Innovation and Technology, Prime Minister's Office, 10 Downing Street, Foreign, Commonwealth & Development Office, The Rt Hon Michelle Donelan MP, The Rt Hon Rishi Sunak MP, and The Rt Hon James Cleverly MP, ‘Iconic Bletchley Park to host UK AI Safety Summit in early November’ (24 August 2023) https://www.gov.uk/government/... accessed 25 October 2023.
