Smart Tools, Risky Outcomes? Ethical Challenges of AI in HR Unfolded with Tanvi Choksi, CHRO, Mahindra Holidays & Resorts India

Tanvi Choksi, CHRO, Mahindra Holidays & Resorts India

In an exclusive interaction with APAC Media and CXO Media, Tanvi Choksi, CHRO, Mahindra Holidays & Resorts India Ltd, explores at length how organisations can deploy employee-facing technology, from productivity trackers to AI assistants, without eroding trust, privacy, or ethical values. As AI tools become embedded in hiring, performance evaluation, and engagement measurement, organisations face a growing responsibility to keep these systems fair, transparent, and humane.

How can organisations align the deployment of employee-facing tech tools (e.g., productivity trackers, collaboration software, AI assistants) with core ethical principles and organisational values?

I believe that aligning employee-facing tech tools with an organisation’s core values begins with a people-centric mindset. It’s essential that any technology introduced—whether it’s a productivity tracker, collaboration tool, or AI assistant—reflects a culture of trust, transparency, and empowerment. This includes communicating the purpose and use of each tool, safeguarding employee privacy, and ensuring the technology is designed to support and enable teams rather than to monitor or micromanage them. Involving employees from the start, valuing their feedback, and regularly assessing the impact of these tools are key steps. When implemented thoughtfully, technology can strengthen employee engagement, enhance well-being, and uphold the ethical foundation of the organisation.

To what extent should ethical considerations override business efficiency when deploying employee surveillance or analytics tools?

Ethical considerations must take precedence over pure efficiency when deploying employee surveillance or analytics tools. While business efficiency is important, it should never come at the cost of employee trust, dignity, and psychological safety. It’s vital to set boundaries around data collection, involve employees in the process, and ensure human oversight. Framing ethics and efficiency as opposing goals is misleading—in reality, ethical implementation builds the trust and engagement needed for long-term productivity and organisational health.

What are the best practices for distinguishing between permissible monitoring (e.g., for security or compliance) and intrusive surveillance?

The key to separating acceptable monitoring from intrusive surveillance is being clear, fair, and respectful. Monitoring is okay when it’s done openly, for valid reasons like security, compliance, or protecting company assets. It should only cover work-related activities during work hours and use the least invasive methods needed. Companies should avoid tracking personal data, ask for consent, especially if personal devices are involved, and keep all collected information secure. When monitoring is transparent, limited in scope, and focused on real business needs, it stays ethical and respectful.

What ethical concerns arise from the use of AI/ML tools in employee performance evaluations, hiring, or engagement scoring, and how are leading firms mitigating them?

Using AI and machine learning in areas like hiring, performance reviews, or engagement scoring raises key ethical concerns, especially around bias, lack of transparency, and privacy. If AI is trained on biased data, it can lead to unfair outcomes, especially in hiring. Many AI systems also don’t clearly explain how decisions are made, which makes it hard to check whether they’re fair or legal. There’s also the risk of sensitive employee data being misused. Leading companies are tackling these issues by using diverse data, keeping humans involved in decisions, making AI systems more understandable, and regularly auditing for fairness. They also protect data through encryption and privacy rules. The best organisations go further by building a culture where employees feel safe to raise concerns, ensuring that both the technology and the work environment stay ethical and trustworthy.

What incident response or grievance mechanisms should organisations have in place for misuse or breach of employee data via tech tools?

Organisations need strong systems to handle any misuse or breach of employee data. This includes having a clear action plan to quickly find, contain, and assess the issue, along with honest communication to affected employees. A cross-functional team drawn from IT, HR, legal, and communications should manage the response, follow data protection laws, and review what went wrong to prevent it from happening again. It’s also important to provide simple and safe ways for employees to report data misuse, such as a secure email channel, with clear steps and timeframes for handling these reports. Regular training and communication ensure employees know how and where to raise their concerns. These systems should always respect privacy, limit unnecessary data use, and reflect the organisation’s values, because protecting employee data is not just a compliance requirement, it’s part of building a trustworthy and ethical workplace.

How do employees perceive the trade-off between personalisation and privacy in workplace technology? Are organisations measuring and responding to that perception?

Employees in India are becoming more aware of the balance between helpful personalised technology and their privacy at work. While personalised tools can make work easier and more efficient, many worry about their data being watched or misused. Employees’ sense of privacy strongly influences their loyalty and engagement with the organisation. Companies are starting to listen by creating clear data policies, following privacy laws, and building trust. They also use audits and employee feedback to improve how they handle data. Still, more needs to be done. Involving employees when making data rules and clearly explaining how their data is used can help build more trust and make personalised tools more acceptable.

What trends are emerging in ethical-by-design HR technology, and how are forward-thinking organisations adapting to them?

In 2025, ethical-by-design HR technology is becoming more important as companies focus on fairness, transparency, and employee well-being. New tools use AI and machine learning to improve hiring and performance reviews while reducing bias with diverse data and clear explanations. Leading organisations create ethics boards and assign people to oversee responsible AI use, making sure technology fits their values. HR tech is also helping with mental health, flexible work, and personalised growth. Companies are designing tools that respect human values and include everyone. As AI gets smarter and can make decisions on its own, organisations are making sure humans stay involved to keep things ethical.

How can industry bodies, regulators, and enterprises collaboratively build ethical standards for employee tech tool deployment in the age of AI?

In 2025, industry groups, regulators, and companies in India are working together to create ethical rules for using AI tools with employees. The government’s Ministry of Electronics and Information Technology (MeitY) introduced a voluntary ethics code for AI that focuses on fairness, transparency, and responsibility. They also set up the IndiaAI Safety Institute to support research and develop standards that consider India’s unique culture and economy. These efforts show India’s commitment to using AI at work in a way that respects employees and encourages innovation.

Sugandh Bahl Vij
