Is it legal to use AI in hiring? We did a deep dive into the laws by state and region, and give you everything you need to know! #braintrustair #ai #hiring https://hubs.li/Q02DC4bW0
Braintrust’s Post
-
AI&Law | MBA | Informal IT Lawyer | Founder & CEO ThinkLegal | co-founder Mike - your AI legal assistant | co-founder Wineppy
#AI Regulation: Companies Will Navigate Complicated Regulations

Companies are going to have to start thinking about what it means on the ground when customers exercise their rights, particularly en masse. What happens if you are a large company using AI to assist with your hiring process, and even hundreds of potential hires request an opt-out? Do humans have to review those resumes? Does that guarantee a different, or better, process than what the AI was delivering? We’re only just starting to grapple with these questions.

#artificialintelligence #legal #compliance #business

Any ideas, suggestions or projects you want to realise with us? Write to me in a DM so we can have a chat. The best is yet to come 🔥 ThinkLegal MetAIverse Accelerator Wineppy 🚀 #notforeveryone https://lnkd.in/dJqjp5x7
What to Expect in AI in 2024
hai.stanford.edu
-
🚨 Is AI Compromising Fair Hiring Practices? 🚨

AI is transforming recruitment, but with efficiency comes the risk of bias. Despite the promise of objectivity, AI can inadvertently perpetuate discrimination, posing serious legal challenges for employers under human rights legislation.

AI bias often mirrors the prejudices in its training data. For instance, Amazon’s AI tool, designed to streamline hiring, was scrapped after it favoured male candidates: because the technology field is dominated by men, the patterns in successful candidates led the system to learn that male candidates were preferable. This kind of bias isn't just a tech glitch, it's a potential legal minefield leading to unintended discriminatory hiring practices based on prohibited grounds.

Read our latest article to understand:
- How AI bias manifests in recruitment
- Real-world examples of AI failures
- Strategies to mitigate bias in AI

Stay informed, stay compliant. Click here to dive deeper into this critical issue. ⬇️ https://lnkd.in/eBTB8reR

#LegalTech #AI #Recruitment #BiasInAI #HumanRights #EmploymentLaw #LegalLiability #OntarioHumanRightsCode
Artificial Intelligence, Bias, and the Ontario Human Rights Code - MyOpenCourt
https://myopencourt.org
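One common bias-audit strategy of the kind the post alludes to is checking a screening tool's selection rates against the "four-fifths rule" used in US adverse-impact analysis. A minimal sketch, with entirely hypothetical applicant numbers (the group labels, counts, and 0.8 threshold here are illustrative, not from the article):

```python
# Hedged sketch: flag groups whose selection rate falls below 80% of the
# highest group's rate (the "four-fifths rule"). All figures are made up.

def adverse_impact_ratio(selected, applied):
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening outcomes per applicant group.
applied = {"group_a": 200, "group_b": 180}
selected = {"group_a": 60, "group_b": 27}   # rates: 0.30 vs 0.15

ratios = adverse_impact_ratio(selected, applied)
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this is a starting point for an audit, not a legal defence; thresholds and protected groups depend on the applicable jurisdiction.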
-
Embracing AI in the workplace can lead to efficiency gains and happier employees. Find out how AI assistants are revolutionising industries like marketing and legal! 🤖 #aiintheworkplace #futureofwork
🚀Is AI the key to unlocking efficiency or a threat to job security? According to Raconteur, 32% of UK employees fear job redundancy due to AI, but 28% see it as a productivity booster. Dive into the debate on implementing AI assistants responsibly and strategically 👉 https://bit.ly/4aM1Nkk #aiintheworkplace #futureofwork
How to implement AI assistants
raconteur.net
-
Check out our Co-Founder & Chief Data Scientist Brian DeAngelis’s thoughts on this important article on AI biases in hiring.
https://lnkd.in/eqJmYPs4

This great Bloomberg article by Leon Yin, Davey Alba and Leonardo Nicoletti shows the dangers of naive implementations of AI in hiring. The broader internet’s data reflects (and often amplifies) the biases behind human decision-making. It’s especially problematic in important domains like hiring.

Some of the issues highlighted in this piece have easy resolutions (e.g. don’t let GPT see the names on resumes), but ensuring the overall fairness of data and approach is much harder to address.

AdeptID uses AI to get more people into better jobs, faster. To support this goal, we’re focused on transparency, fairness and accountability. Check out the “Practices of Transparency” on our website.
OpenAI’s GPT Is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias
bloomberg.com
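The "don't let GPT see the names" fix mentioned in the post can be as simple as redacting known candidate names before resume text reaches a language model. A minimal sketch, where the function name, sample resume, and name list are all illustrative (real pipelines would pull names from the applicant-tracking system):

```python
# Hedged sketch: replace each known candidate name with a neutral token
# before the text is sent to any scoring model. Inputs are hypothetical.
import re

def redact_names(resume_text, candidate_names):
    """Replace each known candidate name (case-insensitively) with a token."""
    for name in candidate_names:
        resume_text = re.sub(re.escape(name), "[CANDIDATE]",
                             resume_text, flags=re.IGNORECASE)
    return resume_text

resume = "Jane Doe\nExperienced analyst. Contact: jane doe at example.com"
print(redact_names(resume, ["Jane Doe"]))
```

As the post notes, this only removes one obvious proxy; other fields (schools, addresses, group memberships) can still leak the same signal.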
-
Making AI work isn't just about coding; it's a whole different ball game. It's not just about tech stuff; you've got to think about philosophy, ethics, language, and even being able to chat like a pro. That's why having journalists on board could be a game-changer. They bring in fresh angles and know-how that can really shape up AI systems. https://lnkd.in/gUCrp2zv
Study reveals employers' lack of knowledge on implementing AI effectively
here.news
-
A total of 18 companies worked with BBB National Programs to develop a set of principles and policies for the self-regulation of artificial intelligence (AI) technologies in the hiring process. Eric Reicin, president and CEO of BBB National Programs, told Fox Business Network the effort came together due to the "need for business to do the right thing, the need for business to self-regulate when the rules are not necessarily clear in government." Watch the interview: https://lnkd.in/gVqbagS6 #AI #GenerativeAI #Hiring #Recruiting
Businesses look to self-regulate the use of AI in hiring
foxbusiness.com
-
The Future of Management: AI Managers 🤖 A growing trend: firms are not just adopting AI, but hiring AI Managers to drive strategy and ethical use. This shift highlights AI's central role in modern businesses. An exciting opportunity for consultants and a strategic move for firms! 💼🚀 #legaltech #legalinnovation #AIinManagement #HiringTrends #alabuzz
Law Firms Are Recruiting More AI Experts as Clients Demand 'More for Less'
insurancejournal.com
-
Government agencies at the federal, state and local levels are looking to play catch-up on #AI regulations, prompting companies to self-regulate their use of AI in hiring. A working group composed of BBB National Programs' Center for Industry Self-Regulation and senior legal and privacy representatives from large, global employers came together to create AI Principles and Protocols around several key objectives. Those include ensuring that AI systems are valid and reliable; promoting equitable outcomes with harmful bias managed; increasing inclusivity; facilitating compliance, transparency and accountability; and striving for systems that are safe, secure, resilient, explainable, interpretable and privacy-enhanced. Read more: https://hubs.la/Q01YNKYV0 #BiasManagement #MachineLearning #SelfRegulation
Businesses look to self-regulate the use of AI in hiring
foxbusiness.com
-
💡 AI Bias: A Liability Lurking in the Shadows

A recent Computerworld article sheds light on a ticking time bomb in the tech world: AI bias. AI tools, despite their potential, can mirror and magnify human biases. The kicker? Companies could be held liable for these biases, leading to legal and reputational fallout.

This isn't just about dodging liability. It's about shaping AI that truly benefits everyone. We need diverse teams, regular bias audits, and transparency in decision-making. It's time to step up, face the challenge, and create AI that's fair and unbiased. https://lnkd.in/e-eecDrR

#AI #ArtificialIntelligence #BiasInAI #EthicalAI #TechIndustry
AI tools could leave companies liable for anti-bias missteps
computerworld.com
-
Professor at California State University, East Bay. Director of the Women in Leadership Program (WIL). Ph.D., SHRM-SCP
#AI #writing Only 21% of firms have an AI policy. Our university doesn't have an AI policy, and a preliminary search shows that most universities don't, leaving it up to faculty to negotiate their path forward. The main concerns raised in the workplace are inaccuracy, plagiarism and misrepresentation. But no one raised the issue of blandness, or the boredom that I face in reading AI content.

I have been charmed by ChatGPT and used it to create job descriptions that are tedious to generate. However, I find an incredible blandness in student papers that they have "edited" with AI, where one student's work merges into another in my mind. More dangerously, in reviewing scholarship applications, I find that a number of them seem to have been generated or touched up with AI. Unfortunately, those students' applications don't stand out, since their individual voices were lost in the machine. I am curious to know if others see this happening.
Ready to Draft an Up-to-Date AI Policy? Target Top Risks
shrm.org