By Julie Mehrdawi

Legal Risks of Using AI in Hiring: What Employers Need to Know

Artificial Intelligence (AI) is rapidly transforming how recruitment operates, offering companies a more streamlined and efficient way to assess candidates. However, if not properly managed, AI can expose businesses to significant legal risks, including discrimination claims, privacy violations, and compliance issues. Employers must understand these risks and implement safeguards to ensure AI tools are used responsibly and lawfully. 


AI Discrimination and Bias: A Major Legal Risk 


One of the most significant concerns with using AI in recruitment is the potential for unlawful discrimination. Australian laws such as the Racial Discrimination Act 1975 (Cth), Sex Discrimination Act 1984 (Cth), and Age Discrimination Act 2004 (Cth) prohibit discrimination based on protected attributes including race, gender, and age. These laws apply to AI systems just as they do to human decision-makers. If an AI tool inadvertently discriminates against a candidate based on a protected attribute, the employer can be held liable, even if the discrimination was unintentional.


The risk often arises from the AI system’s training data. If the data used to build the AI model reflects past hiring biases—such as a preference for male candidates in senior roles—the AI may replicate these biases in future decisions, disadvantaging female candidates. A prominent example is Amazon’s AI recruitment tool, which was discontinued after it was found to favour male applicants because it had been trained on a data set that skewed heavily towards men. 


To mitigate this risk, employers should conduct regular audits of their AI systems to identify and correct any discriminatory patterns. Retraining AI models using balanced data and implementing human oversight in key decisions are also critical strategies to prevent biased outcomes. 
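To make that audit recommendation concrete, the sketch below shows what a basic disparate-impact check on AI shortlisting outcomes might look like in Python. Everything in it is an assumption for illustration: the data format, the group labels, and the 0.8 threshold, which is the US EEOC "four-fifths" rule of thumb used here only as an indicative benchmark, not an Australian legal standard.

from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) records."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate against the highest rate.

    Ratios below 0.8 (the US EEOC 'four-fifths' rule of thumb, used here
    purely as an illustrative benchmark) are flagged for human review.
    """
    best = max(rates.values())
    return {g: (r / best, r / best < 0.8) for g, r in rates.items()}

# Hypothetical audit data: (gender, shortlisted-by-AI) pairs.
records = [("female", True)] * 12 + [("female", False)] * 88 \
        + [("male", True)] * 25 + [("male", False)] * 75

rates = selection_rates(records)
for group, (ratio, flagged) in adverse_impact_ratios(rates).items():
    status = "REVIEW" if flagged else "ok"
    print(f"{group}: rate={rates[group]:.2f} ratio={ratio:.2f} [{status}]")

In this hypothetical data set, female candidates are shortlisted at 0.48 times the male rate, so the audit would flag the tool for review and possible retraining. The same comparison can be run across any protected attribute for which outcome data is lawfully held.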


The "Black Box" of Transparency and Accountability 


AI systems, particularly those using complex machine learning algorithms, can function like a “black box,” making it difficult for even the developers to understand how decisions are made. This lack of transparency becomes a serious legal issue if a candidate challenges a hiring decision or lodges a complaint of discrimination. 


Under the Fair Work Act 2009 (Cth) and various anti-discrimination laws, candidates are entitled to understand the reasons behind employment decisions that affect them, especially if they believe they were treated unfairly. If an employer cannot provide a clear, understandable explanation for why a candidate was rejected because the AI's decision-making process is too opaque, it may be unable to defend the decision and could face liability. A lack of transparency not only complicates compliance but also undermines trust in the business.
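One practical mitigation, sketched below under assumptions not drawn from this article, is to prefer interpretable scoring models whose factors can be stated in plain language. The features, training data, and model choice here are all hypothetical; the point is only that a weighted, inspectable score can be explained to a rejected candidate, where a black-box model often cannot.

# A minimal sketch of an explainable screening score, assuming a simple
# logistic regression over interpretable features (all names hypothetical).
# Because each feature has an explicit weight, a rejection can be explained
# in plain language rather than as an opaque model output.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["years_experience", "relevant_degree", "skills_match_score"]

# Hypothetical training data: rows of feature values, 1 = shortlisted.
X = np.array([[5, 1, 0.9], [1, 0, 0.3], [3, 1, 0.7],
              [0, 0, 0.2], [7, 1, 0.8], [2, 0, 0.4]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def explain(candidate):
    """Print each feature's contribution (weight * value) to the score."""
    contributions = model.coef_[0] * candidate
    for name, c in sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1])):
        print(f"{name}: {c:+.2f}")

explain(np.array([1, 0, 0.3]))  # why a hypothetical candidate scored low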


Privacy and Data Protection Concerns 


AI recruitment tools often require large amounts of candidate data to operate effectively, which can raise privacy and data protection issues. Under Australia’s Privacy Act 1988 (Cth), employers must collect only the data necessary for recruitment, store it securely, and use it for legitimate purposes as outlined in the Australian Privacy Principles (APPs). 


But how does AI get this information?


Many systems go beyond basic resume data by gathering information from various online sources. For example, AI tools can automatically pull data from social media profiles like LinkedIn, professional websites, or other public databases. In some cases, AI may collect insights from candidates’ social media posts or blogs, including topics related to their interests, experiences, or community activities. 


More sophisticated AI tools can even make inferences about personal attributes, including health, by analysing language patterns or detecting mentions of health-related topics in public posts. For instance, if a candidate has shared articles on mental health or participated in charity events for specific illnesses, the AI might flag these as potential indicators of a health condition. This ability to piece together information from different sources means that, even if employers don't intentionally seek out sensitive data, AI may inadvertently collect or infer it. 


To prevent privacy breaches, businesses must inform candidates clearly about what data is being collected, obtain their consent where needed, and ensure that both their own AI tools and any third-party providers fully comply with Australia’s privacy regulations. 

 

The Future of AI Regulation in Employment 


As AI continues to develop, regulators are placing more scrutiny on its use in hiring. The European Union's AI Act classifies employment-related AI tools as "high risk", imposing stringent compliance requirements. Australia may implement similar regulations, potentially reshaping how businesses can use AI in recruitment. 


To stay ahead, employers should establish strong compliance frameworks, keep up with legislative changes, and follow best practices to mitigate potential liabilities as the regulatory environment evolves. 



 



ABOUT THE AUTHOR


Julie Mehrdawi is a passionate and dedicated member of our team, excelling in Commercial Litigation, Corporate Law and Regulatory Advice, Employment Law and Defamation Law. Julie's commitment to staying at the forefront of legal developments in our clients' industries ensures she is able to successfully identify and mitigate legal risks, safeguarding our clients' interests.

Julie’s takes great pride in her approachable nature, which allows her to collaborate closely with clients and provide tailored, expert legal advice and representation that allows her clients to succeed.


Julie is able to leverage her diverse background and think outside the box to deliver comprehensive, pragmatic, and holistic solutions for her clients in every area of law.
