AI and risk management for small businesses
Whether you love AI, hate it, or are simply on the fence without a firm view, one thing is true: AI is here to stay. Even if you think you’re not “using AI”, you probably are; you just don’t realise it! AI has become so seamless that we use it without thinking.
In this article, we will share a list of those everyday AI tools and outline how to approach AI and risk management, including the legal and practical risks AI poses for small businesses.
We’ll cover:
How AI is embedded in your daily tasks;
Real-life examples of AI risks for small businesses (including sole traders, not just businesses that hire employees or contractors);
How to reduce those risks with an AI policy to suit your business protection needs; and
The answer to one of our most asked questions: “What are the components of a good AI policy?”
AI and small business - how you use it more than you realise
Every day, small business owners like you use AI.
Here are just a few examples of how small business owners in Australia are currently using AI daily (how many do you use?):
Email & office software like Copilot in the Microsoft 365 suite and Google Gemini in Google Workspace (e.g. suggested replies);
Writing assistant software like Grammarly;
Visual Design & Content Creation software like Canva (e.g. Canva Magic functionality);
Project management software like Trello, Asana, Monday, ClickUp, etc.;
Practice management systems like Mediportal, Splose, Smokeball, Leap, Actionstep, Clio, etc (e.g. time recording);
Accounting & Financial Management software like Xero, MYOB, QuickBooks, Sage, FreshBooks, etc.;
Meeting recording software like Zoom, Teams, Meet, etc;
Scribing or Notetaking software like Heidi Health, Fireflies or Otter;
Website integrations like Zapier, AI-optimised calendars & scheduling tools, email software and chatbots; and
Voice assistants on your mobile, tablet or laptop, such as Siri, Alexa and Gemini.
While this is only a short list, it shows how commonplace AI has become, and that it’s not just AI-first tools like ChatGPT, Gemini, Perplexity and Claude that create potential risks in small businesses.
As AI development picks up pace, we can expect AI capabilities to be enhanced exponentially.
Each time you agree to the new terms and conditions on a software platform or your device, it’s likely you are agreeing to AI being used more broadly.
There are countless benefits to using AI in the examples outlined above, such as increased productivity and easier design work. However, AI functionality also has risks. You have probably heard about inaccuracies (sometimes referred to as “hallucinations”), but there are other risks too.
We explore these below.
The biggest risks of AI in small businesses
While AI offers efficiency and innovation, it can create significant and unexpected risks for small business owners. These risks can be broadly classified into these four key areas:
Privacy breaches
Intellectual property (IP) issues
Confidentiality risks
Regulatory and legal compliance risks
Privacy breaches
Privacy concerns top the list of serious legal risks associated with AI use. For example, a team member or contractor could upload personal or sensitive data to public AI platforms such as ChatGPT or image-generation tools. While this use may be unintentional or done to save time, the very real and concerning consequence is that data shared with ChatGPT could then become available to other platforms or people.
While health service providers and other small businesses that trade in sensitive and personal information have privacy obligations detailed in the Privacy Act, as at the date of this article (May 2025), 90% of small businesses fall under the small business privacy threshold and are currently exempt from the added legal and compliance requirements - see more in our Privacy Blog here.
However, changes for small businesses and the requirement for a privacy policy are likely on the horizon (read about those changes here). We anticipate that it will change the landscape of privacy law in Australia and place a requirement on small businesses to have a privacy policy.
Even if your business is currently exempt, sharing identifiable information such as client names, medical or employment data or financial details in an AI tool may breach the Privacy Act. It may also mean you breach your service or client agreement, and confidential data ends up online or in unknown hands. After all, we don’t know where the data goes once we allow AI access. A large part of AI is machine learning, and by using AI we are training the platforms. While this helps us as small business owners, it also helps the owners of the AI platforms.
Intellectual property issues
Some business owners assume AI-generated content is safe to publish. In fact, most business owners probably don’t think about it at all. However, things aren’t that simple.
AI-generated text, images or designs may inadvertently infringe the copyright of others.
Example 1 - song lyrics in an Instagram caption
A small business posted an AI-generated Instagram caption that included lyrics from a popular song, which resulted in a copyright takedown notice.
Example 2 - logo theft
Another small business owner created a logo using elements taken from an artist’s work, resulting in a cease and desist letter from the artist.
These are just two examples, but spend time with an internet search engine and you will find many more.
Because AI tools are trained on enormous datasets that can include copyrighted content owned by a business or person (including blogs, training materials, artworks and photos), the content AI generates for you could contain protected IP without the correct attribution. Think about what happened with the recent Brooki and Nagi recipe copyright issue. That wasn’t AI, but it shows just how seriously people take copyright and legal ownership.
As a small business owner, you are legally responsible for what you publish. You can’t outsource your risk to an AI platform.
Confidentiality risks
Confidentiality is not just about keeping secrets. It is also a legal obligation under your service agreements, contracts, and non-disclosure agreements (NDAs).
For example, imagine Michelle, an employee of Steve’s Caravans, whose boss needs her to complete a report. Michelle takes client sales data and enters it into AI to generate the report. Although well intentioned, and tempting because the analysis capabilities of AI are wonderful, by entering the data Michelle breached the client’s confidentiality and the legal obligations of Steve’s Caravans under its service contract.
Even when AI tools offer features like “incognito mode”, this rarely means true anonymity. The terms and conditions of these AI platforms often disclose that user inputs may still be used to train their models or stored for review. But really, how many of us, our employees or our contractors actually read the Ts and Cs?
Legal compliance risks
AI tools are helpful and offer many advantages; however, they don’t understand Australian law, context or nuance (believe us, we have tested that here at Ready to Boss Legal!). And an AI platform won’t keep your business out of trouble with the Fair Work Commission or the Australian Taxation Office (ATO).
For example, back to Steve and Michelle. Suppose Steve fires Michelle for her breach of confidentiality outlined above, but doesn’t learn from her mistake and uses AI (e.g. ChatGPT) to draft her employee termination letter. If the letter doesn’t reflect the correct legal situation, Michelle could be unfairly dismissed, and Michelle and Steve could end up before the Fair Work Commission.
Steve and Michelle show us that AI tools can assist a small business in myriad ways, but they cannot interpret legal obligations under the law, understand and apply your unique facts, or ensure procedural fairness or legal compliance. Only a lawyer can do that.
The problems faced by Michelle and Steve could have been avoided with a clear, tailored AI policy that was followed by staff members. Without an effective AI policy in place, your business is left exposed.
However, despite these new everyday risks, it’s not all doom and gloom!
You can arm yourself, your contractors and your employees with awareness, knowledge and an AI policy that guides their work and protects your business, not just from a legal, financial or compliance perspective, but from a reputational one as well.
Getting started with AI and risk management
AI is no longer the “next big thing”. It is already here and part of everyday business operations. Whether you are using AI to help write emails, generate content, or analyse customer trends, AI and risk management begin with understanding what an AI policy should cover.
It isn’t about banning AI tools. Instead, it is about setting clear, practical boundaries for the safe use of AI, to protect you and your business. There is one essential tool which will help you do this: an AI Use Policy.
A well constructed AI policy must:
Clarify how AI can (and cannot) be used in your business;
Outline who is accountable for the misuse of AI and the consequences;
Demonstrate compliance with privacy and consumer protection laws.
What are the components of a good AI policy?
A strong AI policy doesn’t need to be complicated or 30+ pages. But it does need to be clear and legally sound. The core components of a good AI Use policy include:
Purpose and scope: who the policy covers (just your employees, or also the contractors, customers and collaborators you work with);
Definitions: make sure the policy covers all of the ‘AI tools’ and ‘AI’ you work with, whether generative AI, diagnostic analytics, or something else;
Acceptable use: clear rules around what is permitted and what is not, such as a prohibition on entering personal or confidential data into public AI tools;
Approval process: who can approve AI-generated output (there should always be human approval) and who publishes content;
Privacy policy alignment: how the AI Use Policy fits in with your privacy policy and where to find both documents - we suggest they go in the footer of your website;
Consequences for breaches: what will happen if someone breaches the AI Use Policy; and
Integration: how the policy connects with your website terms of use, privacy, social media and IT policies.
Now, before your head starts spinning: we have a number of AI policies to suit a range of small businesses, which make this process manageable and cost-effective. You’ll find a list of them below.
Implementing your AI policy
Once you have a policy, the next steps are essential.
Communicate the AI policy clearly to your team.
Provide your staff or contractors with short training or onboarding on acceptable AI use; short onboarding or FAQ-style sessions are often enough.
Update contracts (employment agreement and contractor agreement) and procedure manuals to include references to the AI policy.
Review the AI policy regularly as laws and tools evolve. Your policy should keep up.
Transparency with clients and telling them you use AI
It is not just your team who needs to know about your AI policy, your clients do too.
Just like your privacy policy or website terms of use, letting clients know that you may be using AI tools (for things like transcripts, chatbot interactions, etc.) is a vital step towards transparency and trust.
This disclosure can be made in:
Your client (or service) agreement;
Your Website Privacy Policy and terms of use; and
A standalone AI Use Policy or section on your website.
AI and risk management
Here is a list of our AI policy templates to use with your team and add to your website:
AI usage policy template for small business
AI policy for law firms
We also have addendums (add-on clauses for existing agreements) suitable for those of you who have bought these templates from us:
VA Service Agreement;
Coaching Agreement for Clients (covers AI use for meeting transcripts, summaries, etc.);
Chatbot Build Agreement;
Contractor Agreements;
Subcontractor Agreements;
NDIS Service Agreements;
Copywriter Service Agreements; and
Graphic Designer Agreements.
If you are a graphic designer or website agency, these agreements have AI policies included.
AI and risk management needn’t be hard. You can work with your team to identify the types of AI they already use and put an AI policy in place to protect your business. Don’t wait for a privacy complaint or an employee dispute before you do.
AI is here to stay. When used correctly, it can absolutely supercharge your small business. But without clear guardrails in place, you are leaving yourself open to contract disputes, legal claims and risk.
The good news? Managing those risks doesn’t have to be hard: talk to your team about how they are already using AI, choose the right AI policies and implement them.
Related: Privacy law for small business owners in Australia
Ready to Boss Legal is a legal publisher. We are not a law firm and you acknowledge that by purchasing, downloading and customising this template, Ready to Boss Legal is not acting as your lawyer or providing you with legal advice. This article is legal information only and should not substitute for or constitute professional legal advice. We recommend you consult with a lawyer for legal advice, noting Ready to Boss Legal is a legal publisher and not a law firm. All copyright in this article belongs to Ready to Boss Legal. If you share it we ask that you acknowledge us as the source.