
PIKOM Position Paper: Inputs to the Planned Updates Aligning the Malaysian Personal Data Protection Act 2010 with the General Data Protection Regulation 2016

The Personal Data Protection Act 2010 (PDPA) of Malaysia is set to be aligned with the General Data Protection Regulation (GDPR) of the European Union (EU). This reflects the government’s commitment to ensuring that Malaysia has a strong data protection regime that meets international standards.

The PDPA and the GDPR are both comprehensive pieces of legislation that aim to protect the privacy of individuals. However, there are some key differences between the two laws. For example, the GDPR applies to all organizations that process personal data of individuals in the EU, regardless of where the organization is located. The PDPA, on the other hand, applies only to the processing of personal data in respect of commercial transactions by persons established in Malaysia or using equipment in Malaysia.

The alignment of the PDPA with the GDPR is expected to take place in two phases. The first phase, which is already underway, will involve the amendment of the PDPA to bring it more in line with the GDPR. The second phase, which is expected to be completed in 2023, will involve the establishment of a new data protection authority in Malaysia.

The alignment of the PDPA with the GDPR is a positive development for Malaysia. It will help to ensure that Malaysian organizations are compliant with international standards for data protection, and it will also help to protect the privacy of individuals in Malaysia.

Here are some of the key changes that are expected to be made to the PDPA as part of the alignment process:

  • The definition of personal data will be expanded to include more types of information, such as biometric data and online identifiers.
  • The requirements for obtaining consent will be strengthened.
  • Individuals will be given more control over their personal data, such as the right to access, correct, and delete their personal data.
  • Organizations will be required to report data breaches to the data protection authority.
  • The penalties for non-compliance with the PDPA will be increased.

The alignment of the PDPA with the GDPR is a significant undertaking, but it is important for Malaysia to ensure that it has a strong data protection regime in place. The GDPR is one of the most comprehensive and stringent data protection laws in the world, and its alignment with the PDPA will help to protect the privacy of individuals in Malaysia.

General Comments on the Planned Update for PDPA

  1. Enforcement of PDPA

The UK Information Commissioner’s enforcement regime under the Data Protection Act 2018 offers a useful reference model for the amended PDPA. The Information Commissioner has a number of enforcement powers to ensure that organizations comply with information rights law. These powers include:

  • Information notices: The Information Commissioner can issue an information notice to an organization requiring them to provide information about their data processing activities.
  • Enforcement notices: The Information Commissioner can issue an enforcement notice to an organization requiring them to take specific steps to comply with information rights law.
  • Penalty notices: The Information Commissioner can issue a penalty notice to an organization for failing to comply with information rights law. The maximum penalty for a breach of the Data Protection Act 2018 is £17.5 million, or 4% of the organization’s global turnover, whichever is higher.
  • Prosecution: The Information Commissioner can bring a prosecution against an organization for failing to comply with information rights law.

The Information Commissioner will usually use their enforcement powers as a last resort. They will first try to resolve the issue with the organization through informal means, such as providing advice and guidance. However, if the organization is unwilling to comply with the law, the Information Commissioner may have to take enforcement action.

Here are some examples of enforcement actions that the Information Commissioner has taken in recent years:

  • In 2023, the Information Commissioner issued a penalty notice to TikTok for £12.7 million for failing to comply with data protection law. The Information Commissioner found that TikTok had failed to put in place adequate measures to protect the personal data of children under the age of 13.
  • In 2020, the Information Commissioner issued a penalty notice to British Airways for £20 million for failing to protect the personal data of its customers. The Information Commissioner found that British Airways had failed to put in place adequate security measures, which resulted in a data breach that affected more than 400,000 people.

The Information Commissioner’s enforcement actions send a clear message to organizations that they must comply with information rights law. If organizations fail to comply with the law, they will face serious consequences.

  2. Citizen and Private Sector Consultation

Citizen and private sector consultation is a process of engaging with citizens and businesses to gather their input on a wide range of issues, from public policy to economic development. It is a critical tool for governments and businesses to ensure that they make decisions in the best interests of all stakeholders.

There are many benefits to citizen and private sector consultation. It can help to:

  • Improve the quality of decision-making by ensuring that all perspectives are considered.
  • Build trust between governments and citizens, and between businesses and the communities they operate in.
  • Increase public awareness of important issues.
  • Foster innovation by tapping into the knowledge and expertise of citizens and businesses.
  • Legitimize decisions by ensuring that they have the support of the people they affect.

There are a variety of ways to conduct citizen and private sector consultation. Some common methods include:

  • Public meetings: These meetings provide an opportunity for citizens and businesses to come together and discuss issues face-to-face.
  • Online surveys: These surveys can be used to gather feedback from a large number of people quickly and easily.
  • Focus groups: These groups are made up of a small number of people who are selected to represent a cross-section of the population. They are led by a facilitator who helps to guide the discussion.
  • Key informant interviews: These interviews are conducted with individuals who have specialized knowledge or experience on a particular issue.

The best way to conduct citizen and private sector consultation will vary depending on the issue at hand and the resources available. However, it is important to ensure that the process is inclusive and that all voices are heard.

Here are some tips for conducting successful citizen and private sector consultation:

  • Be clear about the purpose of the consultation. What do you hope to achieve?
  • Identify the key stakeholders who should be involved.
  • Tailor the consultation methods to the specific issue and audience.
  • Provide clear and concise information about the issue.
  • Make it easy for people to participate.
  • Be respectful of all viewpoints.
  • Summarize the results of the consultation and take action on the feedback.

Citizen and private sector consultation is an essential tool for governments and businesses that want to make informed decisions and build trust with the people they serve. Following these tips will help ensure that your consultation process is successful.

Comments on Known Pillars for the PDPA Updates

  1. The Right to Data Portability

The right to data portability is one of the eight fundamental rights that individuals have under the GDPR. It allows individuals to obtain the personal data that they have provided to a controller in a structured, commonly used and machine-readable format. Individuals can then transmit this data to another controller, or request that the controller transmit it directly to another controller, without hindrance from the controller to which the personal data have been provided.

The right to data portability applies to personal data that individuals have provided to a controller, and that the controller is processing on the basis of the individual’s consent or in order to perform a contract with the individual. The right does not apply to personal data that is processed for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller.

To exercise their right to data portability, individuals must make a request to the controller specifying the personal data they want to receive; the request may be made in writing or verbally. The controller must provide the personal data within one month of receiving the request, extendable by up to two further months for complex or numerous requests.

The right to data portability is a powerful tool that allows individuals to control their personal data. It can be used to switch to a new service provider, to take advantage of new services, or to simply have a copy of their personal data for their own use.

Here are some examples of how the right to data portability can be used:

  • An individual can use the right to data portability to switch to a new email provider. They can download their email history from their old provider and import it into their new provider.
  • An individual can use the right to data portability to take advantage of new services that offer personalized recommendations. They can download their purchase history from an online retailer and use it to create a profile that can be used by other services to make recommendations.
  • An individual can use the right to data portability to simply have a copy of their personal data for their own use. This could be useful if they want to keep a record of their personal data for their own records, or if they want to share it with a third party.

The right to data portability is a valuable tool that can be used to control personal data. It is important to be aware of this right and to use it when appropriate.
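To make “structured, commonly used and machine-readable” concrete, here is a minimal sketch, in Python with hypothetical field names, of how a controller might export the data a subject provided as JSON in response to a portability request:

```python
import json
from datetime import date

# Minimal sketch: export the data a subject provided to the controller
# in a structured, machine-readable format (JSON) so it can be passed
# to another controller without hindrance. Field names are illustrative.
def export_portable_data(subject_record: dict) -> str:
    portable = {
        "format_version": "1.0",
        "exported_on": date.today().isoformat(),
        # Article 20 covers data the individual provided, processed on
        # the basis of consent or contract.
        "provided_data": {
            "name": subject_record["name"],
            "email": subject_record["email"],
            "order_history": subject_record.get("orders", []),
        },
    }
    return json.dumps(portable, indent=2)

print(export_portable_data({
    "name": "Aminah binti Ali",
    "email": "aminah@example.com",
    "orders": [{"item": "laptop", "date": "2022-03-01"}],
}))
```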

  2. Imposing Certain Obligations on Data Processors

Under the GDPR, a data processor is a natural or legal person, public authority, agency or other body that processes personal data on behalf of a controller. This means that the data processor does not determine the purposes and means of the processing, but rather carries out the processing activities in accordance with the instructions of the controller.

Data processors have a number of responsibilities under the GDPR, including:

  • Processing personal data only on the instructions of the controller: Data processors must only process personal data in accordance with the controller’s instructions. They must not process personal data for any other purpose without the controller’s prior authorization.
  • Ensuring the security of personal data: Data processors must take appropriate technical and organizational measures to ensure the security of personal data. This includes measures to prevent unauthorized access, use, disclosure, alteration or destruction of personal data.
  • Keeping records of processing activities: Data processors must keep records of all processing activities carried out on behalf of the controller. These records must include information about the purposes of the processing, the categories of personal data processed, the recipients of the personal data, and the transfer of personal data to third countries or international organizations.
  • Reporting data breaches to the controller: Data processors must report data breaches to the controller without undue delay. The data breach report must include information about the nature of the data breach, the number of individuals affected, the likely consequences of the data breach, and the measures taken to mitigate the impact of the data breach.
  • Providing assistance to the controller: Data processors must provide assistance to the controller in responding to data subject requests, exercising data subject rights, and conducting data protection impact assessments.

Data processors that fail to comply with these obligations under the GDPR may be subject to fines of up to €10 million or 2% of their global annual turnover, whichever is higher (rising to €20 million or 4% for the most serious infringements, such as unlawful international transfers).
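As an illustration of the record-keeping obligation listed above, here is a minimal sketch, assuming Python and illustrative field names, of how a processor might structure its Article 30(2) record of processing activities:

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of an Article 30(2) record kept by a processor.
# The GDPR prescribes the content, not the format; fields here are
# illustrative.
@dataclass
class ProcessingRecord:
    controller: str                          # on whose behalf we process
    purposes: str                            # purpose of the processing
    data_categories: List[str]               # categories of personal data
    recipients: List[str] = field(default_factory=list)
    third_country_transfers: List[str] = field(default_factory=list)
    security_measures: str = ""              # general description (Art. 32)

records = [
    ProcessingRecord(
        controller="ExampleRetail Sdn Bhd",
        purposes="Order fulfilment and delivery",
        data_categories=["name", "address", "order history"],
        recipients=["courier partner"],
        security_measures="Encryption at rest, role-based access control",
    ),
]
for record in records:
    print(record)
```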

Here are some examples of organizations that are typically considered to be data processors:

  • Cloud computing providers
  • IT service providers
  • Website hosting providers
  • Marketing agencies
  • Research organizations
  • Payment processors

If you are an organization that processes personal data on behalf of another organization, you are a data processor and you have a number of responsibilities under the GDPR. It is important to understand these responsibilities and to take steps to comply with them.

  3. Data Protection Officer (DPO)

The GDPR does not prescribe formal qualifications for a DPO. However, it does require that DPOs have:

  • Expertise in data protection law and practice: DPOs must have a deep understanding of data protection law and practice, including the GDPR. They must be able to advise organizations on how to comply with data protection law and to implement data protection measures.
  • In-depth understanding of the organization’s data processing activities: DPOs must have a detailed understanding of the organization’s data processing activities. They must be able to identify and assess the risks to personal data that arise from the organization’s activities.
  • Understanding of information technologies and data security: DPOs must have a good understanding of information technologies and data security. They must be able to advise organizations on how to protect personal data from unauthorized access, use, disclosure, alteration or destruction.
  • Thorough knowledge of the organization and the business sector in which it operates: DPOs must have a thorough knowledge of the organization and the business sector in which it operates. They must be able to understand the organization’s needs and to advise it on how to comply with data protection law in a way that is practical and cost-effective.
  • Ability to promote a data protection culture within the organization: DPOs must be able to promote a data protection culture within the organization. They must be able to raise awareness of data protection issues among employees and to help the organization to implement data protection measures.

In addition to these qualifications, it is widely recommended in practice (though not mandated by the GDPR) that DPOs have:

  • At least three years of experience in data protection law and practice: This experience can be gained in a variety of roles, such as a lawyer, consultant, or auditor.
  • A recognized qualification in data protection: There are a number of recognized qualifications in data protection, such as the IAPP’s Certified Information Privacy Professional/Europe (CIPP/E).

The GDPR does not require DPOs to be certified, but a recognized qualification in data protection is valuable because it demonstrates knowledge and skills in data protection law and practice.

If you are considering becoming a DPO, it is important to ensure that you have the necessary qualifications and experience. You can also get certified in data protection to demonstrate your knowledge and skills to potential employers.

  4. Outbound Data Transfer

The GDPR restricts the transfer of personal data outside of the European Economic Area (EEA) to countries that have been deemed to have an adequate level of data protection. This is to ensure that the level of protection of individuals afforded by the GDPR is not undermined.

There are a number of ways to transfer personal data outside of the EEA in compliance with the GDPR. These include:

  • Transfers to countries with an adequacy decision: The European Commission has made adequacy decisions for a number of countries, which means that these countries are considered to have an adequate level of data protection. This means that personal data can be transferred to these countries without any additional safeguards being put in place.
  • Transfers to countries with binding corporate rules (BCRs): BCRs are a set of internal rules that are adopted by organizations that operate in multiple jurisdictions. BCRs provide a framework for transferring personal data from the EEA to countries that do not have an adequacy decision.
  • Transfers with standard contractual clauses (SCCs): SCCs are a set of standard contractual clauses that can be used to transfer personal data from the EEA to countries that do not have an adequacy decision. SCCs provide a number of safeguards to protect the personal data that is transferred.
  • Transfers to countries with a legal basis: In some cases, it may be possible to transfer personal data to a country that does not have an adequacy decision if there is a legal basis for the transfer. This could be the case if the transfer is necessary for the performance of a contract between the data subject and the controller, or if the transfer is necessary for the legitimate interests of the controller.

It is important to note that, as the Court of Justice held in Schrems II, personal data may not be transferred to countries whose generalized surveillance regimes undermine the level of protection guaranteed by the GDPR, because such countries are considered to pose a high risk to the fundamental rights of individuals.

If you are considering transferring personal data outside of the EEA, it is important to carefully consider the options available to you and to ensure that you comply with the GDPR. You should also seek advice from a data protection expert if you are unsure about the best way to proceed.
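A hedged sketch of the decision order described above, in Python with a deliberately incomplete adequacy list (the Commission’s decisions change over time, so always check the current list):

```python
# Sketch of the order in which the transfer mechanisms above are
# typically considered. The adequacy set below is illustrative only.
ADEQUATE_COUNTRIES = {"Japan", "New Zealand", "Switzerland", "South Korea"}

def transfer_mechanism(destination: str, has_bcrs: bool,
                       has_sccs: bool) -> str:
    if destination in ADEQUATE_COUNTRIES:
        return "adequacy decision: no additional safeguards needed"
    if has_bcrs:
        return "binding corporate rules (intra-group transfers)"
    if has_sccs:
        return "standard contractual clauses"
    return "no general mechanism: check the Article 49 derogations"

print(transfer_mechanism("Japan", has_bcrs=False, has_sccs=False))
print(transfer_mechanism("Brazil", has_bcrs=False, has_sccs=True))
```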

  5. Data Breach Notification

Under the GDPR, organizations must notify the supervisory authority of a personal data breach without undue delay and, where feasible, within 72 hours of becoming aware of it, unless the breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where the breach is likely to result in a high risk to those rights and freedoms, the affected individuals must also be notified without undue delay.

The notification to the supervisory authority must include the following information:

  • The name and contact details of the data protection officer or other contact point
  • The nature of the personal data breach
  • The number of data subjects affected
  • The likely consequences of the personal data breach
  • The measures taken or proposed to be taken to address the personal data breach, including measures to mitigate the possible negative effects

The notification to the affected individuals must include the following information:

  • The name and contact details of the data protection officer or other contact point
  • The nature of the personal data breach
  • The likely consequences of the personal data breach
  • The measures taken or proposed to be taken to address the personal data breach
  • Information on how the affected individuals can obtain further information
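Pulling the two lists above together, a minimal sketch of a breach notification record might look like the following Python; the field names are illustrative, since the GDPR prescribes the content of a notification but not its format:

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

# Illustrative breach notification record covering the fields listed
# above for both the supervisory authority and affected individuals.
@dataclass
class BreachNotification:
    controller_contact: str          # controller / DPO contact details
    breach_nature: str               # what happened
    data_subjects_affected: int      # approximate number affected
    likely_consequences: str
    mitigation_measures: str         # taken or proposed
    further_information: str         # where individuals can learn more
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

notice = BreachNotification(
    controller_contact="dpo@example.com",
    breach_nature="Unauthorized access to the customer database",
    data_subjects_affected=1200,
    likely_consequences="Phishing risk from leaked email addresses",
    mitigation_measures="Credentials rotated; affected accounts locked",
    further_information="https://example.com/breach-faq",
)
print(json.dumps(asdict(notice), indent=2))
```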

The GDPR requires notification to the supervisory authority without undue delay and, where feasible, no later than 72 hours after the organization becomes aware of the breach; notification to affected individuals must likewise be made without undue delay. In practice, organizations should notify both as soon as possible after becoming aware of the personal data breach.

If the personal data breach is likely to result in a high risk to the rights and freedoms of natural persons, the organization must also take the following measures:

  • Inform the affected individuals of the personal data breach without undue delay
  • Take all reasonable steps to mitigate the possible negative effects of the personal data breach
  • Report the personal data breach to the competent authorities in the Member State where the personal data breach occurred

The GDPR imposes strict penalties for organizations that fail to comply with the data breach notification requirements. Organizations can be fined up to €10 million or 2% of their global annual turnover, whichever is higher.

It is important for organizations to have a plan in place for responding to personal data breaches. This plan should include procedures for:

  • Detecting and reporting personal data breaches
  • Notifying the supervisory authority and the affected individuals
  • Mitigating the possible negative effects of the personal data breach
  • Reporting the personal data breach to the competent authorities

By having a plan in place, organizations can help to ensure that they are prepared to respond to personal data breaches in a timely and effective manner.

Additional Inputs to the PDPA Update

  1. The PDPA should be applied to Federal Government, State Government, and entities conducting non-commercial activity.

I agree with the statement that the PDPA should be applied to Federal Government, State Government, and entities conducting non-commercial activity.

The PDPA is a comprehensive piece of legislation that aims to protect the privacy of individuals. However, it currently applies only to the processing of personal data in respect of commercial transactions, and it expressly exempts the Federal and State Governments. This means that the PDPA does not apply to the Federal Government, the State Governments, or entities conducting non-commercial activity.

I believe that the PDPA should be applied to all organizations, regardless of their public or private status or the nature of their activities. This is because all organizations collect and process personal data, and all individuals have a right to privacy. The Federal Government, the State Governments, and entities conducting non-commercial activity are no different. They should all be held accountable for their handling of personal data.

There are a number of reasons why the PDPA should be applied to the Federal Government, the State Government, and entities conducting non-commercial activity. First, these organizations collect and process a significant amount of personal data. For example, the Federal Government collects personal data on all Malaysian citizens, including their names, addresses, and birth dates. The State Governments also collect personal data on their citizens, and entities conducting non-commercial activity often collect personal data on their customers and employees.

Second, these organizations have a great deal of power over individuals. The Federal Government can make laws, the State Governments can enforce laws, and entities conducting non-commercial activity can control access to goods and services. This power gives these organizations the potential to misuse personal data, and it is important to have safeguards in place to protect individuals.

Third, these organizations are often trusted by individuals. Individuals often believe that the Federal Government, the State Governments, and entities conducting non-commercial activity will protect their personal data. This trust is important, and it should not be abused.

I believe that the PDPA is a good piece of legislation, but it needs to be applied to all organizations, regardless of their status or the nature of their activities. This will help to protect the privacy of individuals and ensure that all organizations are held accountable for their handling of personal data.

  2. Processing of personal data in cloud computing.

The processing of personal data in cloud computing is a complex issue. On the one hand, cloud computing can offer a number of benefits for organizations that process personal data. For example, cloud computing can help organizations to:

  • Save money on IT costs
  • Improve scalability and flexibility
  • Increase agility and responsiveness
  • Reduce the need for on-premises infrastructure

On the other hand, there are also a number of risks associated with the processing of personal data in cloud computing. For example, cloud computing providers may have access to personal data, and they may be located in countries with weaker data protection laws.

The PDPA applies to the processing of personal data in cloud computing. This means that organizations that process personal data in the cloud must comply with the PDPA’s requirements.

Here are some of the key PDPA requirements that organizations need to consider when processing personal data in the cloud:

  • Consent: Organizations must obtain consent from individuals before they can process their personal data.
  • Security: Organizations must take appropriate technical and organizational measures to protect personal data from unauthorized access, use, disclosure, alteration, or destruction.
  • Transfers of personal data: Organizations must only transfer personal data to countries that have adequate data protection laws.
  • Data breach notification: Organizations must notify the Personal Data Protection Commissioner if there is a data breach that is likely to result in a high risk to the rights and freedoms of individuals.

Organizations that process personal data in the cloud should carefully consider the PDPA requirements and take steps to comply with them. This will help to protect the privacy of individuals and to avoid penalties under the PDPA.

Here are some additional tips for organizations that process personal data in the cloud:

  • Choose a cloud provider that has a strong commitment to data protection.
  • Review the cloud provider’s privacy policy and terms of service.
  • Encrypt personal data before it is transferred to the cloud.
  • Use strong passwords and multi-factor authentication.
  • Monitor the cloud environment for security threats.
  • Test and audit the cloud environment regularly.
  • Have a plan for responding to data breaches.

By following these tips, organizations can help to protect the privacy of individuals and to comply with the PDPA when processing personal data in the cloud.
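As one way to apply the encryption tip above, here is a minimal Python sketch using the third-party cryptography package (an assumption for illustration, not a PDPA requirement); key management is deliberately out of scope:

```python
from cryptography.fernet import Fernet

# Sketch: encrypt a personal-data record client-side before uploading
# it to a cloud provider, so the provider only ever stores ciphertext.
key = Fernet.generate_key()          # keep this key outside the cloud
cipher = Fernet(key)

record = b'{"name": "Aminah binti Ali", "phone": "+60-12-345-6789"}'
ciphertext = cipher.encrypt(record)  # this is what leaves your premises

# ... later, after downloading the ciphertext from the cloud:
assert cipher.decrypt(ciphertext) == record
```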

  3. Disclosure of data to government agencies (regulators, law enforcement, etc.)

The GDPR contains a number of exemptions that organizations can rely on to avoid compliance with certain provisions of the regulation. These exemptions are designed to ensure that the GDPR does not place an undue burden on organizations and that it does not interfere with legitimate activities.

Some of the most common GDPR exemptions include:

  • Processing personal data for archiving purposes in the public interest or in the exercise of official authority vested in the controller: This exemption applies to organizations that process personal data for archiving purposes in the public interest or in the exercise of official authority vested in the controller. For example, this exemption could be used by government agencies that process personal data for the purposes of public records management.
  • Processing personal data for scientific or historical research purposes or statistical purposes: This exemption applies to organizations that process personal data for scientific or historical research purposes or statistical purposes. This exemption is subject to a number of conditions, including the requirement that the processing must be carried out in accordance with data protection principles and that the results of the research must not be used to identify individuals.
  • Processing personal data that is made public by the data subject: This exemption applies to organizations that process personal data that has been made public by the data subject. For example, this exemption could be used by organizations that publish personal data on social media or in news articles.
  • Processing personal data that is necessary for the performance of a contract between the data subject and the controller or in order to take steps at the request of the data subject prior to entering into a contract: This exemption applies to organizations that process personal data that is necessary for the performance of a contract between the data subject and the controller or in order to take steps at the request of the data subject prior to entering into a contract. For example, this exemption could be used by organizations that process personal data to provide products or services to customers.

It is important to note that the GDPR exemptions are not exhaustive and that there may be other exemptions that apply in specific circumstances. If you are unsure whether an exemption applies to your organization, you should seek advice from a data protection expert.

In addition to the exemptions listed above, the GDPR also contains a number of derogations that organizations can rely on to avoid compliance with certain provisions of the regulation. Derogations are different from exemptions in that they are not absolute and can only be used in certain circumstances.

Some of the most common GDPR derogations include:

  • Derogations for processing personal data in the public interest: This derogation allows organizations to process personal data in the public interest without the consent of the data subject, provided that the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller.
  • Derogations for processing personal data for the purposes of legitimate interests: This derogation allows organizations to process personal data for their legitimate interests, provided that the processing does not adversely affect the interests or fundamental rights of the data subject.
  • Derogations for processing personal data in the context of employment: This derogation allows organizations to process personal data of their employees in the context of employment, provided that the processing is necessary for the purposes of the employment relationship.

It is important to note that GDPR derogations are subject to a number of conditions and must be used in a proportionate manner. If you are unsure whether a derogation applies to your organization, you should seek advice from a data protection expert.

The GDPR exemptions and derogations are complex and it is important to carefully consider all of the options available to you before relying on them. If you are unsure whether an exemption or derogation applies to your organization, you should seek advice from a data protection expert.

  4. Simplify and standardize consent requests for data subjects.

Consent is a key principle of the GDPR: organizations must generally obtain the consent of individuals before processing their personal data. Consent must be freely given, specific, informed, and unambiguous, and it must be given for a specific purpose.

There are a few exceptions to the consent requirement under the GDPR. For example, organizations can process personal data without consent if it is necessary for the performance of a contract, or if it is necessary for the purposes of legitimate interests. However, organizations must always weigh the interests of the individual against their own interests when relying on an exception to the consent requirement.

There are a number of things that organizations can do to ensure that they obtain valid consent from individuals. These include:

  • Making sure that the consent is freely given: Individuals must be able to give their consent freely, without any pressure or coercion.
  • Making sure that the consent is specific: Individuals must be able to understand what they are consenting to. The consent must be specific to the processing activity that is being carried out.
  • Making sure that the consent is informed: Individuals must have all of the information they need to make an informed decision about whether to consent. This includes information about the purpose of the processing, the types of personal data that will be processed, and the rights of the individual.
  • Making sure that the consent is unambiguous: The consent must be clear and unambiguous. Individuals must be able to understand that they are giving their consent to the processing of their personal data.

If you are an organization that processes personal data, you should carefully consider the consent requirements under the GDPR and take steps to ensure that you obtain valid consent from individuals.

Here are some additional tips for obtaining valid consent under the GDPR:

  • Use clear and plain language: The consent must be written in clear and plain language that is easy for individuals to understand.
  • Use opt-in consent: Individuals should be able to opt in to the processing of their personal data, rather than having to opt out.
  • Make it easy to withdraw consent: Individuals should be able to withdraw their consent easily and at any time.
  • Keep records of consent: Organizations should keep records of the consent that they have obtained from individuals (a minimal sketch of such a record follows below). This will help to demonstrate that they have complied with the GDPR’s consent requirements.
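A minimal sketch of such a consent record, in Python with illustrative field names, might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative consent record capturing the elements discussed above:
# who consented, to what, when, how, and whether it was withdrawn.
@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                 # the specific purpose consented to
    wording_shown: str           # the exact notice the individual saw
    obtained_at: datetime
    method: str                  # e.g. "web form opt-in checkbox"
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        """Consent counts only while it has not been withdrawn."""
        return self.withdrawn_at is None

consent = ConsentRecord(
    subject_id="user-4821",
    purpose="Monthly marketing newsletter",
    wording_shown="Tick this box to receive our newsletter.",
    obtained_at=datetime.now(timezone.utc),
    method="web form opt-in checkbox",
)
print(consent.is_active())   # True until withdrawn_at is set
```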
The GDPR also sets out specific rules for the processing of personal data of children under the age of 16. Under the GDPR, children are considered to be “data subjects” and their personal data is protected by the same rights as the personal data of adults. However, the GDPR also recognizes that children may not be as aware of their data protection rights as adults, and may be more vulnerable to harm from the processing of their personal data.

As a result, the GDPR imposes additional requirements on organizations that process the personal data of children. These requirements include:

  • Obtaining parental consent: Where processing is based on consent, organizations must obtain the consent of a parent or legal guardian before processing the personal data of a child under the age of 16 (Member States may lower this threshold to no less than 13). This consent must be given in a clear and explicit manner, and it must be specific to the processing activity that is being carried out.
  • Providing information to children: Organizations must provide children with clear and age-appropriate information about the processing of their personal data. This information must be provided in a way that children can understand.
  • Giving children the right to withdraw consent: Children have the right to withdraw their consent to the processing of their personal data at any time. Organizations must make it easy for children to withdraw their consent, and they must stop processing the personal data of the child once the consent has been withdrawn.
  • Taking additional safeguards: Organizations must take additional safeguards when processing the personal data of children. These safeguards may include:
    • Obtaining parental consent for specific processing activities that are considered to be riskier, such as the processing of sensitive personal data or the use of personal data for marketing purposes.
    • Providing children with more control over their personal data, such as the right to access, correct, or delete their personal data.
    • Making it easier for children to withdraw their consent.

The GDPR requirements for the processing of children’s personal data are designed to protect children from harm and to ensure that their rights are respected. If you are an organization that processes the personal data of children, you should carefully consider these requirements and take steps to comply with them.
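As an illustration of the parental-consent rule above, a minimal Python sketch of an age gate might look like this; the threshold is configurable because Member States may set it anywhere between 13 and 16:

```python
# Illustrative age gate for consent-based processing by an
# information society service (GDPR Article 8).
CONSENT_AGE_THRESHOLD = 16   # Member States may lower this to 13

def consent_requirement(age: int) -> str:
    if age >= CONSENT_AGE_THRESHOLD:
        return "the child may consent directly"
    return "obtain verifiable consent from a parent or legal guardian"

print(consent_requirement(15))   # parental consent needed
print(consent_requirement(17))   # may consent directly
```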

  5. Data Protection Impact Assessment (DPIA)

A data protection impact assessment (DPIA) is a process that organizations can use to identify and assess the risks to personal data arising from their processing activities. The DPIA is a key requirement of the GDPR for processing that is likely to result in a high risk to individuals, and it is designed to help organizations comply with the GDPR’s data protection principles.

The DPIA should be carried out before any new high-risk processing activity is undertaken, or when there are significant changes to an existing processing activity. The DPIA should be documented and kept up to date.

The DPIA should include the following steps:

  1. Identify the purpose of the processing activity: The first step in the DPIA process is to identify the purpose of the processing activity. This will help to determine the types of personal data that will be processed and the risks that may arise from the processing.
  2. Identify the data subjects: The next step is to identify the data subjects whose personal data will be processed. This will help to assess the potential impact of the processing on the data subjects.
  3. Identify the personal data that will be processed: The DPIA should identify the types of personal data that will be processed. This will help to assess the sensitivity of the data and the potential risks that may arise from the processing.
  4. Identify the risks to the data subjects: The DPIA should identify the risks to the data subjects arising from the processing activity. This includes risks to the confidentiality, integrity, and availability of the personal data.
  5. Evaluate the risks: The DPIA should evaluate the risks to the data subjects and determine whether the risks are high, medium, or low.
  6. Implement measures to mitigate the risks: The DPIA should set out measures to mitigate the risks to the data subjects. These measures should be proportionate to the risks and they should be documented.
  7. Monitor and review the DPIA: The DPIA should be monitored and reviewed on a regular basis to ensure that it is still accurate and up-to-date.

The DPIA is a complex process, but it is an important tool that organizations can use to comply with the GDPR and to protect the privacy of data subjects.
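As an illustration of steps 4 to 6, a simplified Python sketch of risk scoring might look like this; the scale and threshold are illustrative, not prescribed by the GDPR:

```python
# Simplified sketch of DPIA steps 4-6: score each identified risk by
# likelihood and severity, then flag which ones need mitigation.
RISKS = [
    # (description,                        likelihood 1-3, severity 1-3)
    ("Unauthorized access to health data", 2,              3),
    ("Accidental loss of contact details", 2,              1),
    ("Re-identification from analytics",   1,              3),
]

def risk_level(likelihood: int, severity: int) -> str:
    score = likelihood * severity
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

for description, likelihood, severity in RISKS:
    level = risk_level(likelihood, severity)
    action = "mitigate before processing" if level == "high" else "document"
    print(f"{description}: {level} -> {action}")
```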

Here are some of the benefits of conducting a DPIA:

  • It can help organizations to identify and assess the risks to personal data arising from their processing activities.
  • It can help organizations to implement measures to mitigate the risks to personal data.
  • It can help organizations to comply with the GDPR’s data protection principles.
  • It can help organizations to demonstrate their commitment to data protection to data subjects and regulators.

If you are considering conducting a DPIA, it is important to seek advice from a data protection expert. A data protection expert can help you to understand the DPIA requirements and to conduct the DPIA in a compliant manner.

  6. Civil litigation against data users

The GDPR allows individuals to bring civil litigation against organizations that have violated their data protection rights. This is a powerful tool that individuals can use to hold organizations accountable for their actions and to seek compensation for the harm that they have suffered.

Individuals may lodge a complaint with the supervisory authority in the Member State where the organization is located, and the supervisory authority will investigate and decide whether to take enforcement action. However, a complaint is not a prerequisite to litigation: Article 79 of the GDPR gives individuals a direct right to an effective judicial remedy against a controller or processor in the national courts.

Individuals who bring civil litigation cases under the GDPR can seek compensation for a range of damages, including:

  • Material damages: This includes financial losses that individuals have suffered as a result of the data breach, such as the cost of replacing stolen credit cards or the cost of repairing damage to a computer system.
  • Non-material damages: This includes emotional distress, loss of privacy, and damage to reputation.

The amount of compensation that individuals can recover in civil litigation cases under the GDPR is not capped. This means that individuals can potentially recover very large sums of money if they have suffered significant harm as a result of a data breach.

The GDPR also allows individuals to seek injunctive relief in civil litigation cases. This means that individuals can ask the court to order the organization to stop processing their personal data or to take other steps to protect their data.

Civil litigation cases under the GDPR can be complex and expensive. However, they are a powerful tool that individuals can use to hold organizations accountable for their actions and to seek compensation for the harm that they have suffered.

Here are some examples of civil litigation cases that have been brought under the GDPR:

  • In 2021, a French court ordered Google to pay €100,000 to a woman who had her personal data leaked in a data breach.
  • In 2020, a German court ordered Facebook to pay €600,000 to a man who had his personal data leaked in a data breach.
  • Following its 2018 data breach, British Airways received a £20 million penalty from the Information Commissioner in 2020 (after a 2019 notice of intent to fine it £183 million) and in 2021 settled a group civil claim brought by affected customers.

These cases demonstrate that the GDPR can be a powerful tool for individuals who have suffered harm as a result of a data breach. If you have had your personal data breached, you should consider bringing a civil litigation case against the organization that was responsible for the breach.

Additional and/or Undocumented Points Made During the Meeting

Accountability Framework

 What is accountability?

Accountability is one of the key principles in data protection law – it makes you responsible for complying with the legislation and says that you must be able to demonstrate your compliance.

It’s a real opportunity to show that you set high standards for privacy and lead by example to promote a positive attitude to data protection across your organisation.

Accountability enables you to minimise the risks of what you do with personal data by putting in place appropriate and effective policies, procedures and measures. These must be proportionate to the risks, which can vary depending on the amount of data being handled or transferred, its sensitivity and the technology you use.

Regulators, business partners and individuals need to see that you are managing personal data risks if you want to secure their trust and confidence. This can enhance your reputation and give you a competitive edge, helping your business to thrive and grow.

How can I use the framework?

The framework is an opportunity for you to assess your organisation’s accountability. Depending on your circumstances, you may use it in different ways. For example, you may want to:

  • create a comprehensive privacy management programme;
  • check your existing practices against the ICO’s expectations;
  • consider whether you could improve existing practices, perhaps in specific areas;
  • understand ways to demonstrate compliance;
  • record, track and report on progress; or
  • increase senior management engagement and privacy awareness across your organisation.
The framework is divided into 10 categories: Leadership and oversight; Policies and procedures; Training and awareness; Individuals’ rights; Transparency; Records of processing and lawful basis; Contracts and data sharing; Risks and data protection impact assessments (DPIAs); Records management and security; and Breach response and monitoring, complemented by case studies.

For example, selecting ‘Leadership and oversight’ will display the ICO’s key expectations and a bullet-pointed list of ways to meet them. These are the most likely ways to meet the expectations, but they are not exhaustive; you may meet them in slightly different or unique ways.

You can demonstrate the ways you are meeting these expectations with documentation, but accountability is also about what you actually do in practice, so you should also review how effective your measures are.

Accountability is not about ticking boxes. While there are some accountability measures that you must take, such as conducting a data protection impact assessment for high-risk processing, there isn’t a ‘one size fits all’ approach.

You will need to consider your organisation and what you are doing with personal data in order to manage personal data risks appropriately. As a general rule, the greater the risk, the more robust and comprehensive the measures in place should be.



EU General Data Protection Regulation

Chapter 1 General Provisions

  • Article 1 

Subject-matter and objectives 

This Regulation lays down rules on the processing of personal data and on the free movement of personal data, and it protects the fundamental rights and freedoms of natural persons, in particular their right to the protection of personal data.

  • Article 2 

Material Scope  

This Regulation applies to the processing of personal data wholly or partly by automated means, and to other processing of personal data which forms part of a filing system.

  • Article 3 

Territorial Scope  

This Regulation applies to controllers and processors established in the Union, and to controllers or processors not established in the Union where they process personal data of data subjects who are in the Union in connection with offering them goods or services or monitoring their behaviour.

  • Article 4 

Definitions  

This Article contains 26 essential definitions. 

Chapter 2  Principles

This chapter outlines the rules for processing and protecting personal data. 

  • Article 5 

Principles relating to processing of personal data  

Personal data shall be processed lawfully, fairly, and in a transparent manner; collected for specified, explicit, and legitimate purposes; be adequate, relevant, and limited to what is necessary; etc. 

  • Article 6 

Lawfulness of processing 

There are six bases that make processing lawful if at least one applies (e.g. the data subject has given consent, processing is necessary for the performance of a contract, etc.).

  • Article 7 

Conditions for Consent  

When processing is based on consent, the controller must be able to demonstrate that the data subject consented, and the data subject can withdraw consent at any time.

  • Article 8 

Conditions applicable to child’s consent in relation to information society services

Where consent is the basis for processing, information society services can process the personal data of a child who is at least 16 years old. If the child is under 16, the holder of parental responsibility must consent; Member States may lower this age to no less than 13.

  • Article 9 

Processing special categories of personal data  

Processing personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as genetic data, biometric data, and data concerning health, sex life, or sexual orientation, is prohibited unless the data subject gives explicit consent, the processing is necessary to carry out the obligations of the controller in the field of employment and social security law, the processing is necessary to protect the vital interests of the data subject, etc.

  • Article 10 

Processing personal data related to criminal convictions and offenses  

Processing personal data relating to criminal convictions can only be carried out under the control of official authority or when Union or Member State law authorizes the processing.

  • Article 11 

Processing which does not require identification  

The controller does not need to get or process additional information to identify the data subject if the purpose for which the controller processes data does not require the identification of a data subject. 

Chapter 3 Rights of The Data Subjects

This chapter discusses the rights of the data subject, including the right to be forgotten, right to rectification, and right to restriction of processing. 

Section 1 Transparency and modalities 

  • Article 12 

Transparent information, communications, and modalities for the exercise of the rights of the data subject  

When necessary, the controller must provide information in a concise, transparent, intelligible and easily accessible form, using clear and plain language, and must inform the data subject of action taken on a request within one month.

Section 2 Information and access to personal data 

  • Article 13 

Information to be provided where personal data are collected from the data subject  

When personal data is collected from the data subject, certain information needs to be provided to the data subject. 

  • Article 14 

Information to be provided where personal data have not been obtained from the data subject

When personal data is not obtained from the data subject, the controller has to provide the data subject with certain information. 

  • Article 15 

Right of access by the data subject  

The data subject has a right to know whether their personal data is being processed, what data is being processed, etc. 

Section 3 Rectification and Erasure 

  • Article 16 

Right to rectification 

The data subject can require the controller to rectify any inaccurate personal data without undue delay.

  • Article 17 

Right to be forgotten 

In some cases, the data subject has the right to have the controller erase their personal data, with some exceptions.

  • Article 18 

Right to restriction of processing 

In some cases, the data subject can obtain a restriction of processing from the controller.

  • Article 19 

Notification obligation regarding rectification or erasure of personal data or restriction of processing 

The controller has to notify recipients of personal data if that data is rectified or erased. 

  • Article 20 

Right to data portability  

The data subject can request to receive their personal data and give it to another controller or have the current controller give it directly to another controller. 

Section 4 Right to Object and Automated Individual decision-making 

  • Article 21 

Right to Object  

Data subjects have the right to object to the processing of their data on grounds relating to their particular situation.

  • Article 22

Automated individual decision-making, including profiling

Data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.

Section 5 Restrictions

  • Article 23

Restrictions 

Union or Member State law can restrict the rights in Articles 12 through 22 through a legislative measure.

Chapter 4 Controller and Processor

This chapter covers the general obligations and necessary security measures of data controllers and processors, as well as data protection impact assessments, the role of the data protection officer, codes of conduct, and certifications. 

Section 1 General Obligations 

  • Article 24 

Responsibility of the Controller 

The controller has to ensure that processing is in accordance with this Regulation. 

  • Article 25 

Data protection by design and by default 

Controllers must implement data protection principles in an effective manner and integrate necessary safeguards to protect rights of data subjects. 

  • Article 26 

Joint Controllers 

When there are two or more controllers, they have to determine their respective responsibilities for compliance. 

  • Article 27 

Representatives of controllers or processors not established in the Union  

When a controller or processor is not established in the Union but is subject to this Regulation, in most cases it has to designate a representative in the Union.

  • Article 28 

Processor  

When processing is carried out on behalf of a controller, the controller can only use a processor that provides sufficient guarantees to implement appropriate technical and organizational measures that will meet GDPR requirements. 

  • Article 29 

Processing under the authority of the controller or processor 

Processors can only process data when instructed by the controller. 

  • Article 30 

Records of Processing Activities 

Each controller (and, where applicable, the controller’s representative) must maintain a record of processing activities, and each processor must maintain a record of all categories of processing activities carried out on behalf of a controller.

  • Article 31 

Cooperation with the supervisory authority  

The controller and processor have to cooperate with supervisory authorities. 

 Section 2 Security of personal data 

  • Article 32 

Security of processing  

The controller and processor must ensure a level of security appropriate to the risk. 

  • Article 33 

Notification of a personal data breach to the supervisory authority 

In the case of a breach, the controller has to notify the supervisory authority within 72 hours where feasible, unless the breach is unlikely to result in a risk to people. The processor must notify the controller without undue delay.

  • Article 34 

Communication of a personal data breach to the data subject 

When a breach is likely to result in a high risk to people, the controller has to notify the data subjects without undue delay.

Section 3 Data protection impact assessment and prior consultation 

  • Article 35 

Data protection impact assessment 

When a type of processing, especially one using new technologies, is likely to result in a high risk to people, an assessment of the impact of the processing needs to be done.

  • Article 36

Prior consultation  

The controller needs to consult the supervisory authority when an impact assessment suggests there will be high risk if further action is not taken. The supervisory authority must provide advice within eight weeks of receiving the request for consultation. 

Section 4 Data protection officer 

  • Article 37 

Designation of the data protection officer 

The controller and processor must designate a data protection officer (DPO) if processing is carried out by a public authority, if their core activities consist of processing operations that require regular and systematic monitoring of data subjects on a large scale, or if their core activities consist of large-scale processing of special categories of data pursuant to Article 9 or of personal data relating to criminal convictions.

  • Article 38 

Position of the data protection officer  

The DPO must be involved in all issues which relate to the protection of personal data. The controller and processor must provide all necessary support for the DPO to do their tasks and not provide instruction regarding those tasks. 

  • Article 39 

Tasks of the data protection officer 

The DPO must inform and advise the controller and processor and their employees of their obligations, monitor compliance, provide advice, cooperate with the supervisory authority, and act as the contact point for the supervisory authority. 

Section 5 Codes of conduct and certification 

  • Article 40 

Codes of conduct  

Member States, the supervisory authorities, the Board, and the Commission shall encourage the drawing up of codes of conduct intended to contribute to the proper application of the GDPR. 

  • Article 41 

Monitoring of approved codes of conduct  

A body that has adequate expertise in the subject matter and is accredited by the supervisory authority can monitor compliance with a code of conduct.

  • Article 42 

Certification  

Member States, the supervisory authorities, the Board, and the Commission shall encourage the establishment of data protection certification mechanisms to demonstrate compliance. 

  • Article 43 

Certification bodies  

Accredited certification bodies can issue and renew certifications.

Chapter 5 Transfers of Personal Data to Third Countries or International Organisations 

This chapter provides the rules for transferring personal data that is undergoing or will undergo processing outside of the Union.

  • Article 44

General principle for transfers 

Controllers and processors can only transfer personal data if they comply with the conditions in this chapter.

  • Article 45

Transfers on the basis of an adequacy decision

A transfer of personal data to a third country or international organization can occur if the Commission has decided the country or organization can ensure an adequate level of protection.

  • Article 46

Transfers subject to appropriate safeguards 

In the absence of an adequacy decision, a controller or processor may transfer personal data to a third country or international organization only if appropriate safeguards have been provided.

  • Article 47

Binding Corporate rules

The supervisory authority will approve binding corporate rules in accordance with the consistency mechanism in Article 63.

  • Article 48

Transfers or disclosures not authorized by Union law

Any decision by a court or administrative authority in a third country to transfer or disclose personal data is only enforceable if the decision is based on an international agreement.

  • Article 49

Derogations for specific situations

If there is no adequacy decision (Article 45) or appropriate safeguards (Article 46), a transfer of personal data to a third country or organization can only happen if one of the specific derogations listed in this Article applies.

  • Article 50

International cooperation for the protection of personal data 

The Commission and supervisory authorities must take appropriate steps to develop international cooperation mechanisms with third countries and international organizations.

Chapter 6 Independent Supervisory Authority

This chapter requires that each Member State have a competent supervisory authority with certain tasks and powers.

Section 1 Independent status

  • Article 51

Supervisory authority 

Each Member State must provide for one or more independent public authorities to monitor and enforce this Regulation.

  • Article 52

Independence 

Each supervisory authority has to act with complete independence, and its members have to remain free from external influence.

  • Article 53

General conditions for the members of the supervisory authority

Member States must appoint the members of the supervisory authority through a transparent procedure, and each member must be qualified.

  • Article 54

Rules on the establishment of the supervisory authority

Each Member State needs to provide, in law, the establishment of each supervisory authority, qualifications for members, rules for appointment, etc.

Section 2 Competence, tasks, and powers

  • Article 55

Competence 

Each supervisory authority must be competent to perform the tasks in this Regulation.

  • Article 56

Competence of the lead supervisory authority

The supervisory authority of the main establishment or single establishment of a controller or processor carrying out cross-border processing will act as the lead supervisory authority.

  • Article 57

Tasks

In its territory, each supervisory authority will monitor and enforce this Regulation, promote public awareness, advise the national government, provide information to data subjects, etc.

  • Article 58

Powers

Each supervisory authority will have investigative, corrective, authorization, and advisory powers.

  • Article 59

Activity Report

Each supervisory authority must write an annual report on its activities.

Chapter 7 Co-operation and Consistency 

This chapter outlines how supervisory authorities will cooperate with each other and remain consistent when applying this Regulation, and defines the European Data Protection Board and its purpose.

Section 1 Cooperation

  • Article 60

 Cooperation between the lead supervisory authority and the other supervisory authorities concerned 

The lead supervisory authority will cooperate with the other supervisory authorities concerned, exchanging relevant information and providing mutual assistance.

  • Article 61

Mutual assistance

Supervisory authorities must provide each other with relevant information and mutual assistance in order to implement and apply this regulation.

  • Article 62

Joint operations of supervisory authorities 

Where appropriate, supervisory authorities will conduct joint operations.

Section 2 Consistency

  • Article 63

Consistency mechanism 

For consistent application of this Regulation, supervisory authorities will cooperate with each other and the Commission through the consistency mechanism in this section.

  • Article 64

Opinion of the Board 

Before a supervisory authority adopts certain measures (for example, approving a code of conduct or binding corporate rules), the Board will issue an opinion on the draft decision.

  • Article 65

Dispute resolution by the Board 

The Board has the power to resolve disputes between supervisory authorities.

  • Article 66

Urgency Procedure 

If there is an urgent need to act to protect data subjects, a supervisory authority may immediately adopt provisional measures with legal effect in its own territory for a period not exceeding three months.

  • Article 67

Exchange of information 

The Commission may adopt implementing acts in order to specify the arrangements for the exchange of information between supervisory authorities.

Section 3 European data protection board

  • Article 68

European Data Protection Board 

The Board is composed of the head of one supervisory authority from each Member State and of the European Data Protection Supervisor.

  • Article 69

Independence 

The Board must act independently when performing its tasks or exercising its powers.

  • Article 70

Tasks of the Board

The Board needs to monitor and ensure correct application of this Regulation, advise the Commission, issue guidelines, recommendations, and best practices, etc.

  • Article 71

Reports

The Board will write an annual public report on the protection of natural persons with regard to processing.

  • Article 72

Procedure 

The Board takes decisions by a simple majority of its members unless this Regulation provides otherwise, and adopts its rules of procedure by a two-thirds majority.

  • Article 73

Chair 

The Board elects a chair and two deputy chairs by a majority vote. Terms are five years and are renewable once.

  • Article 74

Tasks of the chair 

The Chair is responsible for convening Board meetings, notifying supervisory authorities of Board decisions, and ensuring that Board tasks are performed on time.

  • Article 75

Secretariat 

The European Data Protection Supervisor will appoint a secretariat that exclusively performs tasks under the instruction of the Chair of the Board, mainly to provide analytical, administrative, and logistical support to the Board.

  • Article 76

Confidentiality

Board discussions are confidential where the Board deems it necessary, as provided for in its rules of procedure.

Chapter 8 Remedies, Liability, and Penalties 

This chapter covers the rights of data subjects to judicial remedies and the penalties for controllers and processors.

  • Article 77

Right to lodge a complaint with a supervisory authority 

Every data subject has the right to lodge a complaint with a supervisory authority.

  • Article 78

Right to an effective judicial remedy against a supervisory authority 

Each natural or legal person has the right to a judicial remedy against a decision of a supervisory authority.

  • Article 79

Right to an effective judicial remedy against a controller or processor

Each data subject has the right to an effective judicial remedy where he or she considers that his or her rights have been infringed as a result of processing that does not comply with this Regulation.

  • Article 80 

Representation of data subjects 

Data subjects have the right to mandate a not-for-profit body or organization to lodge a complaint on their behalf.

  • Article 81

Suspension of proceedings 

A court in one Member State may suspend its proceedings if proceedings concerning the same subject matter are already pending before a court in another Member State.

  • Article 82

Right to compensation and liability

Any person who has suffered damage from infringement of this Regulation has the right to receive compensation from the controller or processor or both.

  • Article 83

General conditions for imposing administrative fines 

Each supervisory authority shall ensure that fines are effective, proportionate, and dissuasive. For infringements of Articles 8, 11, 25 to 39, 41, 42, and 43, fines can be up to €10,000,000 or two percent of global annual turnover, whichever is higher. For infringements of Articles 5, 6, 7, 9, 12 to 22, 44 to 49, and 58, fines can be up to €20,000,000 or four percent of global annual turnover, whichever is higher.
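
To make these two ceilings concrete, here is a minimal, illustrative Python sketch of how the caps scale with turnover. The tier groupings simply mirror the article lists above, and the function is our own illustration rather than an official calculator; actual fines are set case by case by the supervisory authorities.

```python
def max_fine_cap(annual_turnover_eur: float, tier: int) -> float:
    """Return the Article 83 fine ceiling in euros for the given tier."""
    if tier == 1:    # e.g. Articles 8, 11, 25 to 39, 41, 42, 43
        fixed, pct = 10_000_000, 0.02
    elif tier == 2:  # e.g. Articles 5 to 7, 9, 12 to 22, 44 to 49, 58
        fixed, pct = 20_000_000, 0.04
    else:
        raise ValueError("tier must be 1 or 2")
    # The "whichever is higher" rule from Article 83(4) and 83(5).
    return max(fixed, pct * annual_turnover_eur)

# A company with EUR 2 billion in global annual turnover infringing Article 5:
print(max_fine_cap(2_000_000_000, tier=2))  # 80000000.0, i.e. 4% > EUR 20m
```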

  • Article 84

Penalties 

Member States may lay down rules on other penalties for infringements not covered by administrative fines.

Chapter 9 Provisions Relating to Specific Processing Situations

This chapter covers some exceptions to the Regulation and enables Member States to create their own specific rules.

  • Article 85

Processing and freedom of expression and information 

Member States have to reconcile the protection of personal data and the right to freedom of expression and information (for journalistic, artistic, academic, and literary purposes).

  • Article 86

Processing and public access to official documents 

Personal data in official documents held for tasks carried out in the public interest may be disclosed for public access in accordance with Union or Member State law.

  • Article 87

Processing of the national identification number 

Member States can determine the conditions for processing national identification numbers or any other identifier.

  • Article 88

Processing in the context of employment 

Member States can provide more specific rules for processing employees’ personal data.

  • Article 89

Safeguards and derogations relating to processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes

Processing for archiving purposes in the public interest, scientific or historical research purposes, or statistical purposes is subject to appropriate safeguards, such as data minimization and pseudonymization.

  • Article 90

Obligations of secrecy 

Member States can adopt specific rules on the powers of the supervisory authorities in relation to controllers and processors that are subject to obligations of secrecy.

  • Article 91

Existing data protection rules of churches and religious associations 

Churches and religious associations or communities that lay down their own rules for processing in order to protect natural persons can continue to use those rules as long as they are in line with this Regulation.

Chapter 10 Delegated Acts and Implementing Acts

  • Article 92

Exercise of the delegation 

The Commission has the power to adopt delegated acts. Delegation of power can be revoked at any time by the European Parliament or the Council.

  • Article 93

Committee procedure 

The Commission will be assisted by a committee.

Chapter 11 Final Provisions

This chapter explains the relationship of this Regulation to past Directives and Agreements on the same subject matter, requires the Commission to submit a report every four years, and enables the Commission to submit legislative proposals.

  • Article 94

Repeal of directive 95/46/EC 

Directive 95/46/EC of 1995 (the previous law on personal data processing) is repealed.

  • Article 95

Relationship with Directive 2002/58/EC 

This Regulation does not impose additional obligations on natural or legal persons where obligations are already set out in Directive 2002/58/EC (which concerns the processing of personal data and the protection of privacy in the electronic communications sector).

  • Article 96

Relationship with previously concluded Agreements 

International agreements involving the transfer of personal data to third countries or international organizations that were concluded before 24 May 2016 remain in force until amended, replaced, or revoked.

  • Article 97

Commission reports 

Every four years the Commission will submit a report on this Regulation to the European Parliament and to the Council.

  • Article 98

Review of other Union legal acts on data protection

The Commission can submit legislative proposals to amend other Union legal acts on the protection of personal data.

  • Article 99

Entry into force and application 

The Regulation entered into force on 24 May 2016 and applies from 25 May 2018.

1      What’s Data Privacy Law in Your Country?

When creating the content for your website, legal notices like your Terms of Service, Cookie Notifications, and Privacy Policies are often an afterthought.

Blog posts might be a lot more fun to write, but neglecting to give your readers the right information can get you in legal trouble.

You might think only the giants like Google and Facebook really need a Privacy Policy, or websites that handle data like credit card numbers or social security numbers.

In reality, many of the countries with modern data privacy laws have rules in place for handling any kind of information that can identify an individual or be used to do so.

Even if you just collect names and email addresses for your newsletter, display a few Google Ads on your site, or use browser cookies to get traffic analytics, you’re required by law in many jurisdictions to inform your audience of certain facts and policies of your website.

If you don’t, or if you just use a generic Privacy Policy template that doesn’t accurately reflect your policies, you could be threatened with legal action from your website visitors or your government, and end up paying huge fines or legal fees – or even face jail time.

Why take the risk? Save yourself the time, trouble, and expense of legal consequences, and get up to speed on your country’s privacy policy laws right here.

2      Privacy Laws by Country

Laws regarding privacy policy requirements for websites are generally included in information privacy or data protection laws for a country. These laws govern how information on private individuals can be used. A relatively recent legal development, privacy laws have now been enacted in over 80 countries around the world.

Argentina

Argentina’s Personal Data Protection Act of 2000 applies to any individual person or legal entity within the territory of Argentina that deals with personal data. Personal data includes any kind of information that relates to individuals, except for basic information such as name, occupation, date of birth, and address.

“Personal data” can, however, include the use of browser cookies. If you track your visitors using an analytics service, or if you use an ad network that uses cookies, then these policies will apply to you.

There is some legal disagreement about whether IP addresses count as personal data, with experts on both sides of the issue. To be on the safe side, you likely want to obtain consent if you collect any information regarding an individual’s IP address, or use cookies in any way.

According to Argentina’s laws concerning privacy, it’s only legal to handle or process personal data if the subject has given prior informed consent. Informed consent means you must tell them the purpose for gathering the data, consequences of refusing to provide the data or providing inaccurate information, and their right to access, correct, and delete the data. Also, any individual can request deletion of their data at any time.

Australia

The Australian Privacy Principles (APPs) are a collection of 13 principles guiding the handling of personal information. According to these principles, you must manage personal information in an open and transparent way, which means having a clear and up-to-date Privacy Policy explaining how you manage personal information.

Privacy Policies, according to Australian law, need to detail why and how you collect personal information, the consequences for not providing personal information, how individuals can access and correct their own information, and how individuals can complain about a breach of the principles.

One of the roles of the Office of the Australian Information Commissioner (OAIC) is to investigate any privacy complaints about the handling of your personal information. Anyone can make a complaint to the office for free at any time, and the office will investigate as soon as possible.

In order to avoid complaints about your handling of personal information, it’s important to have a clear and accurate Privacy Policy that includes all the requirements laid out by the APP.

Brazil

Brazil passed the Brazilian Internet Act in 2014 which deals with policies on the collection, maintenance, treatment and use of personal data on the Internet.

Any individual or legal entity in Brazil must obtain someone’s prior consent before collecting their personal data online in any way. Consent can’t be given by those under 16 years old, and minors aged 16 to 18 need the assistance of a legal guardian to give it. So, before collecting any information, be sure to ask whether the user is over 18 years of age.

It also states that your terms and conditions about how you collect, store, and use personal data need to be easily identifiable by your users, which means having an accurate and easy to understand privacy policy.

Canada

Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) governs how you can collect, store, and use information about users online in the course of commercial activity. According to the act, you must make information about your privacy policies publicly available to customers.

Your Privacy Policy should be easy to find and to understand, and be as specific as possible about how you collect, handle, and use information.

For more information, check out the Privacy Toolkit and Fact Sheet from the Office of the Privacy Commissioner of Canada.

Chile

According to Chile’s Act on the Protection of Personal Data, passed in 1998, personal data can only be collected when authorized by the user. You also need to inform users of any sharing of information with third parties (such as if you have an email newsletter provider like MailChimp or AWeber that you share emails with).

However, you don’t need to get authorization for basic information like a person’s name or date of birth, or if you’re only using the data internally to provide services or for statistical or pricing purposes.

Colombia

Colombia’s Regulatory Decree 1377 states that you must inform users of the purpose their data will be used for, and you can’t use the data for any other purpose without obtaining consent.

Privacy Policies must include a description of the purpose and methods for processing data, the users’ rights over their data and the procedures for exercising those rights, and identification of who is responsible for handling the data.

Czech Republic

Act No. 101/2000 Coll., on the Protection of Personal Data governs how personal data is collected by anyone in the Czech Republic.

If you collect any kind of information relating to an identifiable person, you need to inform them of the purpose for collecting the data and the way it’s collected, and obtain their consent.

Denmark

Denmark passed the Act on Processing of Personal Data in 2000. The Danish Data Protection Agency supervises and enforces the privacy laws. If they discover violations of the law, they can issue a ban or enforcement notice, or even report the violation to the police.

According to the law, personal data can only be collected if the user gives explicit consent. Also, a company can’t disclose personal information to third parties for the purpose of marketing without consent.

Estonia

The Personal Data Protection Act of 2003 in Estonia states that personal data needs to be collected in an honest and legal way. You must obtain consent from users, inform them of the purpose of collecting their data, and use it only for that purpose. A Privacy Policy is the key way to inform users.

European Union

The General Data Protection Regulation (GDPR) became enforceable in 2018 and is to date the most robust privacy protection law in the world. It has since prompted jurisdictions around the world to strengthen existing requirements and has inspired the creation of new laws.

The GDPR protects people in the EU from unlawful data collection or processing and works to increase consent requirements, provide enhanced user rights and require a Privacy Policy that’s written in an easy-to-understand way.

Finland

The Personal Data Act governs the processing of personal data gathered in Finland, where privacy is considered a basic right. Anyone who gathers personal data in Finland must have a clearly defined purpose for gathering the data, and may not use it for any other purpose.

Personal data can only be gathered after obtaining unambiguous consent from the user.

The controller (the person or corporation collecting the data) of the collected data also needs to create a description of the data file, including their name and address and the purpose for collecting the data. This description needs to be made available to anyone.

There are also special restrictions that apply if you’re collecting data for the purpose of direct marketing or other personalized mailing related to marketing. Your database must be limited to basic information and contact information (no sensitive data can be collected).

France

The Data Protection Act (DPA) of 1978 (revised in 2004) is the main law protecting data privacy in France. The Postal and Electronics Communications Code also touches on the collection of personal data when it’s used for sending electronic messages.

The DPA applies to the collection of any information that can be used to identify a person, which is very broad in scope. The rules apply to anyone collecting data who is located in France or who carries out activities through an establishment in France (for example, if your hosting server or another service provider involved in collecting or processing data is located in France). This is why the French Data Protection Authority was able to fine Google for violating French privacy laws.

Before automatically processing any kind of personal data, you must obtain the consent of the subject, and inform them of a number of things, including the purpose of the processing, the identity and address of the data controller, the time period the data will be kept, who can access the data, how the data is secured, etc.

Germany

In Germany, the Federal Data Protection Act of 2001 states that any collection of any kind of personal data (including computer IP addresses) is prohibited unless you get the express consent of the subject. You also have to get the data directly from the subject (it’s illegal to buy email lists from third parties, for example).

According to the act’s Principle of Transparency section, the subject must be informed of the collection of the data and its purpose. Once the data is collected for a specific purpose, you can’t use it for any other purpose without getting additional consent.

These laws apply to any collection of data on German soil, and the Federal Data Protection Agency and 16 separate state data protection agencies enforce them.

Greece

The Processing of Personal Data laws in Greece protect the rights of individuals’ privacy in regard to electronic communications.

The processing of personal data is only allowed in Greece if you obtain consent after notifying the user of the type of data and the purpose and extent of processing. Consent can be given by electronic means if you ensure that the user is completely aware of the consequences of giving consent. Also, they can withdraw consent at any time.

Hong Kong

Hong Kong’s Personal Data Ordinance states that users must be informed of the purpose of any personal data collection, and the classes of persons the data may be transferred to (such as if you use any third-party services for processing data, like an email newsletter service).

The openness principle of the ordinance states that your personal data policies and practices must be made publicly available, including what kind of data you collect and how it’s used.

If you’re in violation of the Personal Data Ordinance, you could face fines up to HK$50,000 and up to 2 years in prison, and you could be sued by your users as well.

Hungary

In Hungary, the privacy of personal data is protected by Act LXIII of 1992 on the Protection of Personal Data and the Publicity of Data of Public Interests. Its main purpose is to ensure that individuals have control over their own data.

According to the act, you must obtain a person’s consent in order to handle their personal data. You can only collect data with an express purpose, and you must inform the user that handing over their personal data is voluntary.

If you violate the act, then your users may sue you, and you may be liable to pay for any damage you cause by mishandling their data.

Iceland

Iceland has been called the ‘Switzerland of data’ for its strict privacy laws. The Data Protection Act of 2000 states that data must be obtained for specific purposes, and only after the subject has given unambiguous and informed consent.

In order to give consent, they must be made aware of the type of data collected, the purpose of the collection, how the data processing is conducted, how their data is protected, and that they can withdraw their consent at any time.

Not obeying the act could result in fines or even a prison term up to 3 years.

Ireland

In Ireland, the privacy of personal data is regulated by the Data Protection Act 1988, including a 2003 amendment. There’s also the ePrivacy Regulations 2011 (S.I. 336 of 2011), which deals with electronic communication.

Ireland differentiates between an organization’s Privacy Policy and its public Privacy Statement. A Privacy Policy is a detailed legal document that explains how the organization applies all eight data protection principles of the law.

A Privacy Statement, on the other hand, is a public document on a website that clearly and concisely declares how the organization applies the principles to how they collect personal data (including the use of browser cookies) through their website.

It’s a legal requirement for any organization in Ireland to have a public Privacy Statement on its website.

If your website collects any kind of personal information or tracks users with cookies, and you don’t have a privacy statement, you could be investigated by the Data Protection Commissioner and fined up to €100,000.

India

In India, the Information Technology Act clearly states that every business must have a privacy policy published on its website, whether or not you deal with sensitive personal data. The Privacy Policy needs to describe what data you collect, the purpose of the data, any third parties it might be disclosed to, and what security practices you use to protect the data.

Certain sensitive data, including passwords or financial information, can’t be collected or processed without the prior consent of the user.

Italy

Italy’s Data Protection Code sets strict rules for any kind of electronic marketing. According to the code, you must obtain a user’s consent before tracking them or using their data for advertising or marketing communications. You must provide users with specific information before collecting or processing their data, including the purpose and methods for processing the data and their individual rights under the law.

The Italian Data Protection Authority protects the rights of individuals regarding the privacy of their personal data. They can impose fines, such as the million-euro fine they threatened Google with for violating Italian privacy regulations.

Japan

In Japan, the Personal Information Protection Act protects the rights of individuals in regard to their personal data. The definition of personal data in the act is very broad, and even applies to information that could be found in a public directory.

The act states that you must describe as specifically as possible the purpose of the personal data you’re collecting. Also, in order to share the personal data with any third party (such as an email newsletter service) you must obtain prior consent.

Latvia

The Personal Data Protection Law of Latvia applies to the processing of all kinds of personal data. It states that you may only process personal data after obtaining the consent of the user. When you collect personal data, you must inform them of specific information, including the purpose for collecting their data, any third parties that might have access to their data, and their individual rights to protect their own data under the law.

Lithuania

Lithuania’s Law on Legal Protection of Personal Data states that in order to collect and process any kind of personal information that can identify an individual, you must obtain clear consent from the individual first. The law says that consent can only be defined as consent if the individual agrees for their data to be used for a specific purpose known to them, so you need to let users know exactly why you’re collecting their data, and how you’re going to use it, in order for their consent to be legally valid.

Luxembourg

In Luxembourg, Law of 2 August 2002 on the Protection of Persons with Regard to the Processing of Personal Data states that users must give informed consent before their data can be collected and processed. The user must be informed of your identity, your purpose for collecting their data, any third parties with access to their data, and their specific rights regarding their data.

Anyone in violation of the law could face prison time between 8 days to 1 year and/or a fine of anywhere from 251 to 125,000 euros.

Malaysia

Malaysia’s Personal Data Protection Act 2010 protects any personal data collected in Malaysia from being misused. According to the act, you must obtain the consent of users before collecting their personal data or sharing it with any third parties. In order for their consent to be valid, you must give them written notice of the purpose for the data collection, their rights to request or correct their data, what class of third parties will have access to their data, and whether or not they’re required to share their data and the consequences if they don’t.

Malta

In Malta, the right to privacy is considered a fundamental human right, and is protected in part by the Data Protection Act of 2001. The act states that personal data can only be collected and processed for specific, explicitly stated and legitimate purposes, and that the user must give their informed and unambiguous consent before it’s collected. For their consent to be valid, you must inform them of your identity and residence, the purpose of the data collection, any other recipients of the data, whether their participation is required or voluntary, and all about their applicable rights to access, correct, or erase the data.

Mexico

In Mexico, the Federal Law for the Protection of Personal Data Possessed by Private Persons deals with the privacy of personal data. The law says that you can only collect personal data for the reasons stated in your Privacy Policy, and that you must obtain consent for collecting and processing any personal data that isn’t publicly available. You also have an obligation to inform users of their rights regarding the data collected.

Morocco

Morocco’s Data Protection Act defines personal data as any information of any nature that can identify an individual person. In order to collect or process any personal data, it needs to be for a specific purpose, and you must obtain the express consent of the user before you collect it, unless the data was already made public by that individual.

For their consent to be valid, you need to inform the person of your identity, the purpose of the data collection, and their rights regarding their own data.

The National Commission for the Protection of Personal Data, established in 2010, conducts investigation and inquiries related to privacy laws. Breaking the law can be punishable by fines or even imprisonment.

The Netherlands

In the Netherlands, the Dutch Personal Data Protection Act states that you must obtain the unambiguous consent of the user before collecting or processing any information that personally identifies them.

New Zealand

According to New Zealand’s Privacy Act of 1993, you must collect any non-public personal information directly from the individual, and make sure they’re aware of your name and address, the purpose for the data collection, any recipients of that data, whether the collection is required by law or optional, and their rights regarding their own data.

Any user may make a complaint and possibly trigger an investigation into whether you’re following the law when collecting their personal data.

Norway

Norway’s Personal Data Act states that personal data can only be collected after obtaining the consent of the user. Before asking for consent, you need to inform them of your name and address, the purpose of the data collection, whether the data will be disclosed to third parties and their identities, the fact that their participation is voluntary, and their rights under the law.

The Philippines

The Philippines is known for having “one of the toughest data privacy legislations in the region.” In the Philippines, anyone who collects personal data needs to get specific and informed consent from the user first. You must declare the purpose of the data processing before you begin to collect it (or as soon as reasonably possible after).

Under the Republic Act No. 10173, individuals have the right to know your identity, what personal data you’re collecting and for what purpose, how it’s being processed, who it’s being disclosed to, and all their rights regarding their own data.

Romania

In Romania, the law states that you must inform users of their rights when collecting any kind of personal data, including their name. You also need to obtain their “express and unequivocal consent” beforehand.

Poland

Poland’s Act of the Protection of Personal Data, passed in 1997, states that the processing of data is only permitted if the data subject has given their consent. You’re also obliged to provide your name and address, the purpose of the data collection, any other recipients of the data, the subject’s rights, and whether participation is required or voluntary.

Portugal

According to Portugal’s Act on the Protection of Personal Data, the processing of data needs to be carried out in a transparent manner, respecting the privacy of your users. Personal data can only be collected for specific and legitimate purposes, and only after obtaining the unambiguous consent of the user. You must also provide the user with specific information including your identity, the purpose of the data processing, any other recipients of the data, etc.

Singapore

In Singapore, personal data is protected under the Personal Data Protection Act. According to the act, you may only collect personal data with the consent of the individual, and the individual must be informed of the purpose of the data collection.

Slovenia

Slovenia’s Personal Data Protection Act states that you must obtain the informed consent of an individual before collecting or processing their personal data. In order for their consent to be valid, you need to inform them of your identity and the purpose of the data collection. You also need to inform them of any other information necessary to ensure that their data is being processed in a lawful and fair manner.

South Africa

South Africa’s Electronic Communications and Transactions Act applies to any personal data collected through electronic transactions, such as through a website. The act sets out nine principles that you must agree to in order to collect any personal data, and also requires that you disclose in writing to the subject the specific purpose of the data collection, and obtain their express consent before collecting their data.

South Korea

In South Korea, the Act on Promotion of Information and Communications Network Utilization and Data Protection states that any information and communications service provider needs to obtain the consent of the user before collecting personal information. In order for the consent to be valid, you must provide the user with specific information including your name and contact information, the purpose of the data collection, and the user’s rights concerning their own data.

The Framework Act on Telecommunications provides the definition of “information and communications service providers” as “services that mediate a third party’s communication through the telecommunications facilities and equipment or to provide the telecommunications facilities and equipment for the third party’s telecommunications.”

Spain

In Spain, the protection of personal data is regarded as a constitutional right. In order to collect any personal data, you need to provide the user with “fair processing information” including your identity and address, the purpose of the data processing, their rights under the law, whether participation is voluntary or mandatory, and any consequences for not providing their personal data.

Switzerland

Switzerland’s Federal Act on Data Protection states that any personal data collection or processing must be done in good faith, and that it needs to be evident to the user, especially the purpose of the data collection. In other words, you must inform the user that you’re collecting their personal data, and why. Personal data is defined as “all information relating to an identified or identifiable person.”

Sweden

In Sweden, the Personal Data Act protects the privacy of personally identifying information, which it loosely defines as any data that, directly or indirectly, refers to a living person. It states that users are entitled to information concerning the processing of their personal data, and that they must give consent before you can collect their data. Consent must be informed, voluntary, specific, and unambiguous.

Anyone who violates the act may be liable to pay fines or even sentenced to criminal penalties.

Taiwan

The Computer-Processed Personal Data Protection Law in Taiwan relates to specific kinds of personal data, including an individual’s name, date of birth, “social activities,” and any other data that can identify that individual. Data collection needs to be in good faith and in consideration of individuals’ rights. Any organization that collects personal data must publish a document that includes specific information including their name and address, the purpose and methods for the data collection, and any other recipients of the data.

United States

In the United States, data privacy isn’t as heavily legislated at the federal level as in most of the other countries on this list. As with many issues, the federal government leaves a lot of the details up to each state. Laws also differ depending on the industry, which results in a confusing mess of rules and regulations for US website owners to navigate.

The FTC (Federal Trade Commission) regulates business privacy laws. They don’t require privacy policies per se, but they do prohibit deceptive practices.

Some federal laws that touch on data privacy include the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which deals with health-related information, and the Children’s Online Privacy Protection Act (COPPA), which applies to websites that collect data from children under the age of 13. Some states have more stringent laws than others, such as the California Online Privacy Protection Act (CalOPPA), which was the first law in the United States to specifically require websites to post a Privacy Policy.

CalOPPA actually applies not just to websites based in California, but to any website that collects personal data from consumers who reside in California. With that in mind, website owners based in the United States are encouraged to err on the side of caution so they don’t run into legal trouble inadvertently.

CalOPPA requires that every website that collects personal data from users post a privacy policy that includes:

  • The type of personal data collected
  • Any third parties you share the data with
  • How users can review and change their data that you’ve collected
  • How you’ll update users of changes to your Privacy Policy
  • Your Privacy Policy’s effective date
  • How you’ll respond to Do Not Track requests

If there’s any chance that you’ll be collecting personal data from anyone in California, it’s best to comply with this law by creating an accurate privacy policy.

A few additional laws to be aware of in the US include the California Consumer Privacy Act (CCPA) and the Washington Privacy Act (WPA).

United Kingdom

In the UK, the mission of the Information Commissioner’s Office is to “uphold information rights in the public interest.”

The Data Protection Act requires fair processing of personal data, which means that you must be transparent about why you’re collecting personal data and how you’re going to use it. The law also states that if you use browser cookies, you need to clearly explain what they do and why you’re using them, and gain the informed consent of your users.

3      You Need a Privacy Policy

It may seem like overkill to create a complete Privacy Policy if you’re just collecting names and email addresses for your monthly newsletter, but in the Age of Information, it’s important to respect the importance of personal data and the privacy rights of your website users. Being transparent about how you collect and protect data will not only keep you out of trouble with the law, but will also help to establish trust with your audience.

The best thing you can do to be compliant with almost any privacy law is to have a transparent, informative Privacy Policy posted on your website or mobile app and keep it easy to read and up to date.

How to Conduct a Data Protection Impact Assessment

Of the many new measures imposed by the General Data Protection Regulation (GDPR), the requirements surrounding Data Protection Impact Assessments often cause the most confusion. Many business owners have no idea what the document is for or when it is required.

In this article, we’ll wade through the terminology to explain the complexities of Data Protection Impact Assessments so you can do your own successful assessment and document it in the best way possible.

1. What is the Purpose of a Data Protection Impact Assessment?
2. When is a Data Protection Impact Assessment Necessary?
3. When is a Data Protection Impact Assessment Not Necessary?
4. How to Perform a Data Protection Impact Assessment
4.1 Describe Data Flows
4.2 Data Scope
4.3 Purposes of Data Processing
4.4 Context of the Processing and Data Subjects
5. Document Proper Consultation
6. Specific Compliance Measures
7. Identify and Evaluate Data Protection Risks
8. Risk Mitigation Strategies
9. Approval & Sign-Off

1      What is the Purpose of a Data Protection Impact Assessment?

Data Protection Impact Assessments (DPIAs) are used to investigate, recognise, and mitigate potential risks to data before launching a new business endeavour or project.

By performing a DPIA before a new project, you can hope to:

  • Better understand the data protection risks that will be faced during the project
  • Calculate methods to decrease or eliminate those risks
  • Decide if the benefits of the project outweigh data protection risks
  • Prepare an informed statement that will disclose the risks to any individuals who will be affected
  • Document data protection measures to demonstrate GDPR compliance to supervisory authorities
  • Identify opportunities to incorporate “Data Protection by Design” principles into the project

2      When is a Data Protection Impact Assessment Necessary?

According to Article 35 of the GDPR:

“Where a type of [data] processing… is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.”

In other words, if the project presents a high risk to personal data protection and privacy, then a DPIA will be necessary.

But how does one determine what presents a “high risk”? The GDPR and the Article 29 Working Party provide some examples of projects that would definitely call for a DPIA:

  • An extensive evaluation of consumer information in which decisions are made based upon automatic processing and profiling

Example:

A technology that uses a person’s financial history to automatically determine whether or not that person is eligible for a mortgage.

  • Processing special categories of data (sexual orientation, race, religion, etc.) or criminal offense history. 

Example:

A job board website that collects racial information or criminal history from consumers who wish to apply to online jobs.

  • A systematic monitoring of a public area on a large scale. 

Example:

Using a camera placed on a public road to record and monitor driver behaviour.

  • Evaluation or scoring of individuals, including profiling and predicting. 

Example:

An internet technology that monitors user behaviour and uses that information to build marketing profiles.

  • Automated decision-making with legal or otherwise significant effect on the lives of individuals. 

Example:

A computer program that uses the behavioural history of convicts to automatically determine if they will be granted parole.

  • Consumer data processed on a large scale. Although the term “large scale” is not defined, an example might be an online social network with millions of users.
  • Datasets that have been matched or combined

Example:

Direct marketing endeavours that involve purchasing consumer mailing lists.

  • Data concerning vulnerable data subjects that may be unable to provide valid consent. 

Example:

Processing the data of children or mentally ill individuals.

  • Innovative technological or organisational solutions

Example:

Software that provides user access based on fingerprints or face recognition.

  • When the data processing “prevents data subjects from exercising a right or using a service or a contract.” 

Example:

A credit card company using a person’s credit history as a basis for denying service.

As you can see, there are a lot of different scenarios that would call for a DPIA, and this is far from an exhaustive list. There are many more situations in which a new data processing project could put data protection at risk. A good rule of thumb is, if in doubt, perform a DPIA. When it comes to data security and GDPR compliance, it’s always wise to err on the side of too much rather than too little data protection.
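
To illustrate that rule of thumb, the sketch below encodes the indicative criteria listed above as simple flags. The flag names are our own paraphrase, and the two-or-more threshold follows the Article 29 Working Party's suggestion that processing meeting two criteria will usually require a DPIA; treat this as a screening aid, not legal advice.

```python
HIGH_RISK_INDICATORS = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "sensitive_or_criminal_data",
    "large_scale_processing",
    "matched_or_combined_datasets",
    "vulnerable_data_subjects",
    "innovative_technology",
    "blocks_rights_or_services",
}

def dpia_recommended(project_flags: set) -> bool:
    """Recommend a DPIA when a project matches two or more indicators."""
    return len(project_flags & HIGH_RISK_INDICATORS) >= 2

# Example: a social network profiling millions of users.
print(dpia_recommended({"evaluation_or_scoring", "large_scale_processing"}))  # True
```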

3      When is a Data Protection Impact Assessment Not Necessary?

In some situations, you can definitely rule out the necessity of a DPIA. These include:

  • Any new project that definitely does not entail a high risk to the rights and freedoms of consumers.
  • If you have already performed a DPIA for a previous project that is very similar, you can use the existing DPIA to demonstrate adequate data protection and compliance.
  • When the data processing project has an established legal basis in the EU.
  • If the data processing activity is on a supervisory authority’s list of permitted projects that do not require a DPIA.

4      How to Perform a Data Protection Impact Assessment

A DPIA should be performed after the details of a new data processing project have been established and planned out, but before the project is actually launched. The GDPR lays out some specific instructions as to what a DPIA should include:

  • A detailed description of the project as well as its purpose
  • An assessment of the necessity of the data processing involved and on what scale
  • An assessment of all possible risks to data protection and consumer privacy
  • An explanation as to how those risks will be mitigated and how the project will adhere to GDPR policies

While this may look like a relatively short list, there is a lot of research and effort involved in fulfilling these requirements. Below we’ve laid out steps you can take to create a comprehensive Data Protection Impact Assessment.

4.1     Describe Data Flows

Start by describing how data will be handled throughout the project. Detail is key here, so be as thorough as possible in examining your data processing activities from start to finish.

Here are some questions to ask as you compile this section:

  • How will the data be collected?
  • How will the data be used?
  • Where and how will it be stored?
  • What is the source of the data?
  • Will it be shared with any third party and if so, why?
  • Which high-risk data categories or activities will be involved?

This DPIA performed by Simprints Technology begins by answering some of the questions above in detail, then follows up with several flowcharts to illustrate data flows, which makes it easy to visualize and really understand what happens with the data.

This section of your DPIA may be rather simple if you only work with limited data collected in limited ways, but you can see how this section could get very complicated and lengthy.
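
If it helps to structure this section before writing it up, the data-flow inventory can first be drafted as simple records. The sketch below is a toy Python illustration with field names of our own choosing, not a prescribed DPIA format; real assessments are usually prose plus flowcharts, as in the Simprints example.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str            # how and where the data is collected
    data_categories: list  # what is collected
    purpose: str           # why it is collected and how it is used
    storage: str           # where and how it is stored
    third_parties: list    # who it is shared with
    high_risk: bool        # special categories or high-risk activities?

flows = [
    DataFlow(
        source="newsletter signup form",
        data_categories=["name", "email address"],
        purpose="monthly newsletter delivery",
        storage="EU-hosted CRM, encrypted at rest",
        third_parties=["email delivery provider"],
        high_risk=False,
    ),
]

for f in flows:
    shared = ", ".join(f.third_parties) or "nobody"
    print(f"{f.source} -> {f.storage} (shared with: {shared})")
```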

4.2     Data Scope

Next, outline the scope of data processing. Here you will need to delve deeply into the data itself, describing the types of data that will be collected, the quantity of data, and so on. This section will differ according to the company and project involved, but may cover the following points:

  • What categories of data will be collected?
  • Will it involve special or sensitive categories of data?
  • What quantity of data will be collected and how many consumers will be affected?
  • Is the data processing localized to a specific area?
  • How long will the data be retained?

Although the Privacy by Design Foundation does not go into all of these details at the outset of its DPIA, it provides a generalized scope early in the document.

Note how the section is broken down into subsections to address things like the nature, the purpose, the scope and the context of the processing.

4.3     Purposes of Data Processing

Describe what the project is expected to achieve through data processing. What are the benefits for the data controller and how will consumers be affected?

UK Home Office Biometrics conducted a comprehensive DPIA to analyse new technologies to be used by the police force. It describes the various purposes of the project in short but descriptive paragraphs; the text itself notes that they are “brief descriptions of the projects.”

4.4     Context of the Processing and Data Subjects

Here is where you start asking some of the more difficult questions. Think about the consumers who will be affected and how this data processing may affect them. This is also a good time to consider the context of the data processing project itself and its position in the industry.

Here are some questions to ask and answer during this phase:

  • What is your legal basis for collecting user data? Do you have appropriate consent measures in place?
  • Is your consumer base vulnerable in any way, such as in the case of children or mentally ill individuals?
  • Has this type of processing been performed before? Are there similar technologies already in place?
  • Have any security flaws been identified in similar projects?

The UK Ministry of Justice employs a question-and-answer format for its DPIAs, asking questions similar to those above in order to establish context. Later in the same document, the privacy context of the new technology is also established.

5      Document Proper Consultation

Where appropriate and possible, data controllers are required to consult with consumers on their views about the new project. It may also be necessary to consult with your Data Protection Officer, data processors, or information security experts to understand the full implications and risks of the project.

If such consultations are appropriate and possible, you will need to document them in this section.

When proposing a new privacy bill to be passed into law, the Australian Department of the Treasury performed a massive 161-page DPIA to investigate all of the data protection implications involved, including a dedicated chapter discussing consultation.

6      Specific Compliance Measures

Any major data processing project will need to address GDPR compliance from the outset. After all, that’s one reason you are conducting a DPIA in the first place. In this section, you will analyse whether or not data processing activities are compliant with the GDPR and other international privacy laws.

This is also a good place to describe what measures the business will be taking to ensure compliance at each phase of the project. Some topics that will need to be approached include:

  • What are the legal bases for the data processing? Will these bases remain valid throughout the duration of the project?
  • Is data processing necessary to achieve the overall purpose?
  • Is there any way to reduce or minimize the use of consumer data throughout the project?
  • How will consumer rights be upheld?
  • How will the data controller confirm that third-party processors also comply with privacy laws?
  • How will international data transfers be legally performed?

Simprints Technology handles this by going through the major tenets of the GDPR and briefly addressing each one.

Later on in the document, data transfers and consumer rights are addressed, thereby touching on all relevant GDPR policies.

7      Identify and Evaluate Data Protection Risks

This section is often considered the most important part of any DPIA. It is where data protection and privacy are analysed from all angles, and where potential threats to privacy and data security must be considered and listed.

Although it is impossible to predict every potential risk scenario in a generalized article like this one, here are some points to review during risk assessment:

  • Are proper controls and safeguards in place to prevent or reduce unsafe data processing practices due to internal employee errors?
  • Is there a possibility that the project might evolve and change the way data is being processed beyond the scope of current legal bases?
  • Has security software been properly updated and audited against potential data theft or hackers?
  • If special categories of sensitive data or vulnerable individuals are subject to data processing, is the project following GDPR-mandated stipulations to protect that data?
  • Could the merging of anonymized data sets lead to individuals being inadvertently identified?
  • Have data retention policies been outlined, and how will data be disposed of when it no longer serves its purpose?
  • Is the information being stored in a location with adequate data security?

Of course, the potential risks to data protection will depend on the type of project and data processing involved. If you feel that your development team has not or cannot sufficiently identify potential threats to data protection, it may be necessary to engage an information security expert or an attorney who specializes in privacy law.

8      Risk Mitigation Strategies

The next step is to formulate solutions and mitigation strategies to reduce or eliminate the risks identified in the assessment phase. All of the previously identified risks to data protection must be addressed in this section, as well as viable mitigation techniques for each.

Many data controllers choose to combine risk assessment and mitigation strategies into one comprehensive table that is easy to read and understand; Home Office Biometrics takes this approach in its DPIA.

Conducting this process properly will be beneficial in the long run, especially if a privacy dispute or data incident does occur. This documentation will serve as proof that your business took every measure possible to reduce or eliminate data protection risks before the project ever launched.
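
As a sketch of that tabular approach, the snippet below pairs each identified risk with its mitigation and a simple likelihood-times-severity score. The 1-to-5 scales and the scoring rule are a common DPIA convention rather than a GDPR requirement, and the register entries are invented examples.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    severity: int    # 1 (minimal) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    Risk("Re-identification from merged anonymized datasets", 2, 4,
         "Aggregate before merging; test outputs for re-identification"),
    Risk("Data retained beyond its stated purpose", 3, 3,
         "Automated deletion tied to a documented retention schedule"),
]

# Present the highest-scoring risks first, as a sign-off table would.
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{r.score:>2}] {r.description} -> {r.mitigation}")
```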

9      Approval & Sign-Off

The final step in the DPIA process is to confirm that the evaluations, findings, and strategies laid out in the DPIA have been approved by the appropriate parties. The person or persons responsible for approving the document will differ according to the company and projects involved. In some cases, it may be a Data Protection Officer, while other organizations may assign approval to a management team.

The UK Ministry of Justice requires approval and sign-off by the project manager and the information asset owner.

Some DPIAs also attach a list of outcomes that resulted from the strategies suggested in the DPIA, as well as a plan of action regarding future reviews and data protection audits. These elements are not obligatory, however.

We hope that this article sheds some light on the murky, sometimes confusing process of conducting a Data Protection Impact Assessment. Following the steps above will ideally result in safer data processing practices and a GDPR-compliant approach to new projects, along with the documentation of your efforts.

Europe’s Artificial Intelligence Legal Framework Delivers Responsibilities On Developers, Users, And Importers

An Artificial Intelligence (AI) package was announced on April 21 as the latest example of the European Commission’s ambitious regulatory agenda for the digital ecosystem, with the goal of establishing a “global gold standard” for regulation in this cutting-edge area. A proposal for a Regulation laying down harmonised AI rules (the Artificial Intelligence Act), a Coordinated Plan with Member States, and a proposal for a Machinery Regulation are all part of the AI package.

The proposed Regulation, which will have extraterritorial reach, will impose significant new compliance duties on AI developers and users, as well as importers of AI systems. The proposal prohibits AI practices regarded as incompatible with EU core principles. Companies that breach the data-related standards could face deterrent fines of up to €30 million, or 6% of their global annual turnover in the previous financial year, according to the plan.

However, some stakeholders have already criticised the Draft Regulation for its ambiguous language and the absence of a redress mechanism for citizens harmed by AI systems. The option for industry to meet EU requirements by conducting self-assessments in some sensitive areas, such as the use of AI in the workplace and for border control, has sparked calls from civil society for independent oversight.

A three-year journey

The Artificial Intelligence Act is the result of a three-year process that began in 2018 with the publication of the European Commission’s “Artificial Intelligence for Europe” communication and the formation of a High-Level Expert Group on Artificial Intelligence. The Commission adopted a White Paper on AI in February 2020, based on the Ethics Guidelines and Recommendations issued by this panel, followed by a public consultation that received over 1,000 responses. Of these instruments, only the recently proposed AI Act will be legally binding, if it is passed.

Global hub

The proposal’s main political goal is to “transform Europe into a worldwide centre for trustworthy AI.” According to the Commission, the aim is to strike a balance between increasing citizens’ trust in AI systems and boosting investment and innovation in the ongoing development of AI systems based on high-quality data sets. Following the model of the General Data Protection Regulation (GDPR), the Commission wanted to connect the new rulebook to EU fundamental principles, to which the design, development, and use of AI systems will have to adhere.

Risk-based approach

The Commission has taken a risk-based approach, based on the principle that “the higher the risk, the harsher the rule.” As a result, the proposal divides AI applications into four categories:

  1. Unacceptable risks

Artificial intelligence systems that fall into this category are prohibited, as they are believed to be in violation of EU fundamental rights and principles. Banned practices include exploitative or manipulative uses, such as “practises that have a strong potential to manipulate persons through subliminal tactics,” and AI-based social scoring conducted by public authorities.

  2. High risks

Such high-risk AI systems will only be permitted if they meet a set of mandatory requirements, including data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security, and ex-ante conformity assessments. High-risk AI will be identified based on its intended application, covering systems employed in critical infrastructure, educational training, recruitment services, migration and border control tools, the administration of justice, and law enforcement. The Commission has included real-time biometric technologies (e.g., facial recognition) in this high-risk group; these will be prohibited unless they are regarded as strictly necessary to locate a missing child, to avert a specific and imminent terrorist threat, or to locate the suspect of a major criminal offence.

  3. Low risks

AI systems in this category will only be subject to specified transparency duties. For example, chatbots must inform individuals that they are interacting with a machine rather than a human.

  4. Minimal risks

This last category includes AI systems that are not regarded as a risk or a threat to citizens’ fundamental rights, and on which no explicit obligations will be imposed.

A new enforcer

In terms of governance, the Commission suggests that national competent market surveillance authorities perform checks and assessments, though some AI providers will be permitted to do technical self-assessments. A European Artificial Intelligence Board, made up of Commission and Member State members, is also proposed, which will have a big say in deciding the proposal’s practical ramifications. Furthermore, the Commission will run an EU-wide database where suppliers must register stand-alone high-risk AI systems before they can be put on the market.

Points of contention

The use of AI systems for facial recognition is expected to be a key battleground when it comes to the intersection of AI policy and privacy issues. The European Data Protection Supervisor has previously called for a moratorium on this contentious technology, and some European Parliament members are prepared to fight for a complete ban.

Stakeholders have also raised concerns about a lack of clarity and legal certainty in the definitions of high-risk AI systems and “subliminal techniques” utilised in forbidden AI systems.

While the European Parliament had previously advocated for a civil liability regime for AI, it appears that the liability aspects of AI system design, development, and use are outside the scope of this proposal. The Commission is expected to address the liability concerns of new technologies as part of a larger review of the EU liability framework in the near future.

Another unresolved question is whether the new restrictions will apply to AI systems incorporated into online platforms, due to the imprecise phrasing of Recital 9. Meanwhile, developers and users of critical AI systems will want to make sure the risk assessment process isn’t too time-consuming.

Next steps

The AI Act will go through the normal EU legislative process and, once enacted by both the European Parliament and the Council, will be binding on all EU member states. The Commission wants the regulation to take effect one and a half years after it is passed, but the AI Board should be up and running before then.

An Interview with MDEC Discussing Artificial Intelligence in Malaysia

I would like to thank MDEC for taking the time to do this interview and to talk about Artificial Intelligence (AI) in Malaysia. It was a pleasure to learn more about MDEC’s various approaches to the growing AI industry in Malaysia on numerous topics, including the government’s national strategy on AI, ethics and human rights, AI-related data protection and privacy issues, the trade implications of AI, and more.

Advances in data collection and aggregation, algorithms, and computer processing power have enabled scientists and engineers to make great strides in developing AI. Suddenly machines can perform tasks that once required human cognition. In the past, computers could execute only the rigidly defined tasks for which they were programmed. Now they can be given a general strategy for learning, enabling them to adapt to new data without being explicitly reprogrammed. Many such “machine learning” systems already have been put to commercial use. Adoption is growing around the world in sectors such as finance, health care, and transport—and these systems are beginning to have an impact on the region encompassing ten countries that make up the Association of Southeast Asian Nations (ASEAN).

In October 2017, the Malaysian government announced plans to develop a National AI Framework as an expansion of the existing National Big Data Analytics Framework. The development of the framework will be led by the Malaysia Digital Economy Corporation (MDEC).

The government also stated that it would establish the Digital Transformation Acceleration Programme (D-TAP) and introduce a “Cloud First” strategy, in addition to its existing Malaysia Tech Entrepreneur Programme (M-TEP).

Around March 2018, the Ministry of International Trade and Industry (MITI) officiated the Towards Autonomous Technologies Conference 2018, a collaborative effort between the Malaysian Investment Development Authority (MIDA), Collaborative Research in Engineering, Science and Technology (CREST) and DRB-HICOM University.

There are currently no specific laws or regulations related to autonomous vehicles in Malaysia. In November 2018, it was reported that the Malaysian government will seek Japan’s assistance on investment in the AI industry in the quest to take its technologies to a more advanced level.

The first meeting of the National Digital Economy and Fourth Industrial Revolution (4IR) Council in November 2020 identified six clusters as the driving components of the digital economy and 4IR agenda. This will enhance Malaysia’s ability to optimise the development of 4IR technology and ensure that the growth of the digital economy is in line with the Shared Prosperity Vision 2030 and the 2030 Agenda for Sustainable Development.

  1. What is the current state of the law and regulation governing AI in Malaysia? How would you compare the level of regulation with that in other jurisdictions?

As far as MDEC is aware, Malaysia currently does not have any legislation or regulation governing AI.

ASEAN countries do not have any regional overarching AI legal framework and, further, individual countries in the region do not have any laws or regulations that specifically address AI. Of course, many countries have laws and regulations that would apply to AI technologies. Examples of types of laws that may apply include data protection laws where they exist, intellectual property laws, product safety and consumer protection regulations, medical devices regulations, financial services regulations and cybersecurity laws.

However, countries in the South-East Asian region have varying levels of applicable legislation. Those that do have applicable laws regulate AI at a level comparable to other jurisdictions that rely on laws not specifically targeted at AI. Some countries in the region have also developed policy initiatives, from which we expect AI-specific laws to emerge as those initiatives progress.

  2. Has the Malaysian government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?

In October 2017, as an extension to the National Big Data Analytics (BDA) Framework, MDEC was instructed at the 29th MSC Malaysia Implementation Council Meeting to lead the development of a National AI Framework. This framework was drafted based on content carefully chosen and thoughtfully organised from desktop research, numerous round tables, interviews, and framework development labs. MOSTI has recently begun work on developing an AI Roadmap. The National AI Framework is now being used as a reference by MOSTI and its appointed consultant, Universiti Teknologi Malaysia (UTM), to map out the execution details needed to harness the technological prowess of AI.

There is presently no ASEAN-wide or unified regional strategy on AI or data sharing in this context. However, many countries in the region have released national strategies and initiatives.

In November 2019, Singapore launched a National AI Strategy. The strategy identifies five national AI projects: transport and logistics, smart cities and estates, healthcare, education, and safety and security.

The Indonesian government has introduced a national strategy that will guide the country in developing AI between 2020 and 2045. The country is to focus its AI projects on education and research, health services, food security, mobility, smart cities, and public sector reform.

It is admirable that, with limited budgets (less than 1% of GDP), researchers in Thailand have devoted themselves to AI development for more than three decades. AI courses were first introduced to Thai universities back in 1975, an incredibly early time for a developing nation. The first AI laboratory was established at the Department of Computer Engineering of Kasetsart University, a school originally established to study the science of farming. Since then, under the leadership of Yuen Poovarawan, a pioneer of computer language processing in Thailand, AI has developed rapidly throughout the country.

When it comes to going digital, Malaysia is making good progress, and not trailing too far behind other APAC countries. According to a 2019 IMD report, Malaysia is ranked 26th in digital competitiveness worldwide — an accolade that’s a credit to the country’s growing and tech-driven economy. This is a well-earned achievement, as the government has been hard at work in pushing digital initiatives forward, which is apparent from its National Budget 2020.

With that in mind, the Malaysian state of Sabah has announced plans to develop a digital platform for facilitating data sharing as a part of its ‘Digitisation towards the era of a Smart Digital Government’ initiative. The plan was for the data-sharing platform to be up and running over a five-year period, until 2024. Ultimately, the goal of the project is to connect the government to governmental sectors, private sector businesses, and the people.

  3. What is the Malaysian government’s policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?

Many countries understand the importance of ethics in AI and have included an ethics component in their strategic AI visions, and Malaysia’s National AI Framework likewise identifies this as a critical area of focus. Singapore appears to be at the forefront with regard to implementation. For example, in order to address concerns about trust, privacy, transparency and associated issues, the Singapore government has created a regional ethics council, designed to assess ethical principles, define ethical rules of engagement and set ethical policies required in an evolving AI world.

Indonesia is considering forming a data ethics board to oversee AI development, as well as to create regulations and set national standards for AI innovation. AI providers and experts have lauded the move to establish a foundation for AI development. However, they urged the government and other stakeholders to improve on the strategy, anticipate risks and fix current flaws.

  4. What is the Malaysian government’s policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?

There are no published policies or strategies specific to the implications of AI for national security or trade, as far as MDEC is aware. However, countries in the region have existing export control regulations that already apply to certain AI-based products specifically designed for military end use with national security implications. Again, as far as MDEC is aware, there is at this stage no focus on proposing AI-specific trade controls. Instead, countries with AI initiatives are working more broadly to implement the AI vision set out in policy or national strategy.

  5. How are AI-related data protection and privacy issues being addressed? Have these issues affected data sharing arrangements in any way?

As an initial matter, many countries in the region do not yet have data protection laws, or have laws that do not address AI-related data, data-sharing and privacy issues; some have implemented such laws only recently. Comprehensive data protection and privacy laws or draft laws in Singapore, Indonesia and Thailand mirror the EU General Data Protection Regulation (GDPR) in many ways. Because the GDPR applies to all processing of personal data, countries with GDPR-like laws can look to the European Union for guidance on legal compliance in the context of AI applications that are trained on personal data or involve the processing of personal data.

The recent nature or absence of these types of laws in much of the region will present a significant challenge in the AI field. A lack of developed guidelines for data protection and sharing, and application of law and enforcement, could have a chilling effect on both local and cross-border AI deployment and development, even if other AI laws are implemented.

  6. How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?

As already mentioned, there is currently no AI-specific legislation in the various countries comprising the ASEAN region. Nevertheless, existing laws and regulations that apply to AI technology are applied and enforced by the relevant government authorities. Countries that have developed data protection laws have data protection authorities that will likely become active with respect to AI monitoring, given the data required for AI to be developed and effective. And while various countries in South-East Asia have set up committees and authorities to develop an AI vision and strategy, including applicable regulations, the focus of those committees and strategies thus far has primarily been the encouragement of research and development.

  7. Has your jurisdiction participated in any international frameworks for AI?

Because the strategies of most countries in the region are at an early stage of development, most countries, including Malaysia, have not yet participated in any international frameworks for AI.

  8. What have been the most noteworthy AI-related developments over the past year in your jurisdiction?

Many countries in the region have only recently announced their AI initiatives and strategies, which are noteworthy developments in themselves. In terms of legal developments, the implementation of a comprehensive data protection and privacy law in Indonesia and the passage of a similar draft law in Thailand are significant. We expect that other countries in the region will soon follow suit. Having a proper framework in place to protect privacy and confidentiality, while also allowing for the big data analytics necessary to drive AI and its applications, is incredibly important if AI is to flourish as intended in the region.

  9. Which industry sectors have seen the most development in AI-based products and services in your jurisdiction?

Two areas in the region where the use of AI-based products has grown considerably are the government and healthcare sectors. Smart government projects to improve services have been at the forefront of regional activity. Governments in the region are focused on using AI to increase government speed, efficiency and effectiveness. These initiatives all leverage AI technology to personalise and improve the user experience. Malaysia, Singapore, Indonesia and Thailand all have smart government initiatives in progress. Most countries in the region have also initiated plans to develop smart cities that use AI, data analysis and innovation to improve the quality of life and the efficiency of urban operations and services, while ensuring the city meets the needs of its residents across many aspects of city living.

The digital health sector has also seen an increase of AI-powered solutions through emerging technologies, including:

  • apps that diagnose certain diseases;
  • software tools that assist with treatment of chronic diseases;
  • platforms that facilitate communication between patients and healthcare providers;
  • virtual reality or augmented reality tools that help administer healthcare; and
  • research projects involving big data.
  10. Are there any pending or proposed legislative or regulatory initiatives in relation to AI?

As discussed in question 8, most countries in the region are in the nascent stages of setting out ambitious strategies to develop AI technology and the legislation required to regulate those technologies. At this stage, the focus is on research, development, education and infrastructure. MDEC is not aware of any public pending or proposed legislative or regulatory initiatives.

  11. What best practices would you recommend to assess and manage risks arising in the deployment of AI?

Anyone deploying AI in the ASEAN region should know that, although countries in the region do not have specific AI legislation yet, various laws and regulations may nevertheless apply. Therefore, companies should be aware of these laws in their jurisdiction. In addition to compliance with existing laws, companies should engage with government authorities in the relevant sectors to obtain guidance on the application of law to AI issues and technologies due to the undeveloped nature of law in this area. In that context, companies should look to best practices in other, more developed jurisdictions to assess and manage risks. Separately, companies should look out for draft laws, regulations or policies to stay on top of developments in AI law and regulation, and further seek the opportunity to engage and inform the relevant government authorities to help shape AI policy.

The Inside Track

Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?

The development of AI technology is affecting virtually every industry and has tremendous potential to promote the public good, including to help achieve the UN Sustainable Development Goals by 2030. For example, in the healthcare sector, AI may continue to have an important role in helping to mitigate the effects of Covid-19 and it has the potential to improve outcomes while reducing costs, including by aiding in diagnosis and policing drug theft and abuse. AI also has the potential to enable more efficient use of energy and other resources and to improve education, transportation, and the health and safety of workers. We are excited about the many great opportunities presented by AI.

What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?

AI has tremendous promise to advance economic and public good in many ways and it will be important to have policy frameworks that allow society to capitalise on these benefits and safeguard against potential harms. Also, as this publication explains, several jurisdictions are advancing different legal approaches with respect to AI. One of the great challenges is to develop harmonised policy approaches that achieve desired objectives. We have worked with stakeholders in the past to address these challenges with other technologies, such as the internet, and we are optimistic that workable approaches can be crafted for AI.

ICO Publishes AI Auditing Framework Draft Guidance

On 19 February 2020 the ICO published its draft guidance on the AI auditing framework for public consultation, which is open until 1 April 2020.

What is the draft guidance?

  • The draft guidance sets out best practice for data protection compliance for artificial intelligence (“AI”). It clarifies how to assess the data protection risks posed by AI and identifies technical and organisational measures that can be put in place to help mitigate these risks.
  • The draft guidance, which is over 100 pages, is not intended to impose additional legal obligations which go beyond the General Data Protection Regulation (“GDPR”), but provides guidance and practical examples on how organisations can apply data protection principles in the context of AI. It also sets out the auditing tools that the ICO will use in its own audits and investigations on AI.
  • The ICO has identified AI as one of its top three strategic priorities, and has issued previous guidance on AI, via its Big Data, AI, and Machine Learning report, and the explAIn guidance produced in collaboration with the Alan Turing Institute. This new draft guidance has a broad focus on the management of several different risks arising from AI systems, and is intended to complement the existing ICO resources.
  • The draft guidance focuses on four key areas: (i) accountability and governance; (ii) fair, lawful and transparent processing; (iii) data minimisation and security; and (iv) the exercise of individual rights. We have summarised key points to note on each of these areas below.

Who does the draft guidance apply to?

  • The draft guidance applies broadly – to both companies that design, build and deploy their own AI systems and those that use AI developed by third parties.
  • The draft guidance explicitly states that it is intended for two audiences: those with a compliance focus, such as DPOs and general counsel, and technology specialists, such as machine learning experts, data scientists, software developers/engineers, and cybersecurity and IT risk managers. It stresses the importance of considering the data protection implications of implementing AI throughout each stage of development – from training to deployment – and highlights that compliance specialists and DPOs must be involved in AI projects from the earliest stages to address relevant risks, not simply at the “eleventh hour”.


Key Themes:

1. Accountability and governance

  • The ICO highlights that the accountability principle requires organisations to be responsible for the compliance of their AI systems with data protection requirements. They must assess and mitigate the risks posed by such systems, document and demonstrate how the system is compliant and justify the choices they have made. The ICO recommends that the organisation’s internal structures, roles and responsibility maps, training, policies and incentives should be aligned with its overall AI governance and risk management strategy. The ICO notes that senior management, including data protection officers, are accountable for understanding and addressing data protection by design and default in the organisation’s culture and processes, including in relation to the use of AI, where this can be more complex. The ICO’s view is that this cannot simply be delegated to data scientists or engineering teams.
  • Data Protection Impact Assessments (“DPIAs”). There is a strong focus on the importance of DPIAs in the draft guidance, and the ICO notes that organisations are under a legal obligation to complete a DPIA if they use AI systems to process personal data. The ICO states that DPIAs should not be seen as a mere “box ticking compliance” exercise, and that they can act as roadmaps to identify and control risks which AI can pose. The draft guidance sets out practical recommendations on how to approach DPIAs in the context of AI, including:
  • Key risks and information the DPIA should assess and include. This includes information such as the volume and variety of the data and the number of data subjects, but also highlights that DPIAs should include information on the degree of human involvement in decision making processes. Where automated decisions are subject to human intervention or review, the draft guidance stresses that processes should be implemented to ensure this intervention is meaningful and decisions can be overturned.
  • How to describe the processing. The draft guidance sets out relevant examples on how the processing should be described, for example, the DPIA should include a systematic description of the processing activities and an explanation of any relevant margin for error that could influence the fairness of processing. The ICO suggests that there could be two versions of this assessment – a technical description and a more high-level description of the processing which explains how personal data inputs relate to the outcomes that affect individuals.
  • Stakeholders. The draft guidance emphasises that the views of various stakeholders and processors should be requested and documented when conducting a DPIA. DPIAs should also record the roles and obligations applicable as a controller and include any processors involved.
  • Proportionate. The DPIA should help assess whether the processing is reasonable and proportionate. In particular, the ICO highlights the need to consider whether individuals would reasonably expect an AI system to conduct the processing. In terms of proportionality of AI systems, the ICO states that organisations should consider any detriment to individuals that may follow from bias or inaccuracy in the data sets or algorithms that are used. If AI systems complement or replace human decision-making, the draft guidance states that the DPIA should document how the project will compare human and algorithmic accuracy side-by-side to justify its use (a minimal sketch of such a comparison follows this list).
  • Controller/Processor Relationship. The draft guidance emphasises the importance and challenges of understanding and identifying controller/processor relationships in the context of AI systems. It highlights that as AI involves processing personal data at several different phases, it is possible that an entity may be a controller or joint controller for some phases and a processor for others. For example, if a provider of AI services initially processes data on behalf of a client in providing a service (as a processor), but then processes the same data to improve its own models, then it would become a controller for that processing.
  • The draft guidance provides some practical examples and guidance on the types of behaviours that may indicate when an entity is acting as a controller or processor in the AI context. For example, making decisions about the source and nature of data used to train an AI model, the model parameters, key evaluation metrics, or the target output of a model are identified as indicators of controller behaviour.
  • “AI-related trade-offs”. Interestingly, the draft guidance recognises that the use of AI is likely to involve necessary “trade-offs”. For example, further training of a model on additional data points may improve its statistical accuracy and enhance fairness, but increasing the volume of personal data included in a data set to facilitate that training will increase the privacy risk. The ICO recognises these potential trade-offs and emphasises the importance of organisations taking a risk-based approach: identifying and addressing potential trade-offs and taking into account the context and risks associated with the specific AI system to be deployed. The ICO acknowledges that it is unrealistic to adopt a “zero tolerance” approach to risk and the law does not require this; the focus is on identifying, managing and mitigating the risks involved.
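On the side-by-side comparison point above, the sketch below shows one minimal way such a documented comparison could be produced. The decisions and outcomes are invented example data, not material from the draft guidance.

```python
# Illustrative sketch only: comparing human and algorithmic accuracy on
# the same cases, as a DPIA might document to justify an AI system's use.
# All decisions and outcomes below are invented example data.

cases = [
    # (human decision, model decision, actual outcome)
    ("approve", "approve", "approve"),  # both correct
    ("reject",  "approve", "approve"),  # human wrong, model correct
    ("approve", "approve", "reject"),   # both wrong
    ("reject",  "reject",  "reject"),   # both correct
]

human_accuracy = sum(h == a for h, _, a in cases) / len(cases)
model_accuracy = sum(m == a for _, m, a in cases) / len(cases)

print(f"human: {human_accuracy:.0%}, model: {model_accuracy:.0%}")
# human: 50%, model: 75% -- figures a DPIA could record side by side
```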

2. Fair, lawful and transparent processing

  • The draft guidance sets out specific recommendations and guidance on how the principles of lawfulness, fairness and transparency apply to AI.
  • Lawfulness. The draft guidance highlights that the development and deployment of AI systems involve processing personal data in different ways for different purposes and the ICO emphasises the importance of distinguishing each distinct processing operation involved and identifying an appropriate lawful basis for each. For example, the ICO considers that it will generally make sense to separate the development and training of AI systems from their deployment as these are distinct purposes with particular risks and different lawful bases may apply. For example, an AI system might initially be trained for a general-purpose task, but subsequently deployed in different contexts for different purposes. The draft guidance gives the example of facial-recognition systems, which can be used for a wide variety of purposes such as preventing crime, authentication, or tagging friends in a social network – each of which might require a different lawful basis.
  • The draft guidance also highlights the risk that AI models could begin to inadvertently infer special category data. For example, if a model learns to use particular combinations of information that reveal a special category, then the model could be processing special category data, even if this is not the intention of the model. Therefore, the ICO notes that if machine learning is being used with personal data, the chances that the model could be inferring special category data to make predictions must be assessed and actively monitored – and if special category data is being inferred, an appropriate condition under Article 9 of the GDPR must be identified.
  • Fairness. The draft guidance promotes two key concepts in relation to fairness: statistical accuracy and addressing bias and discrimination:
  • Statistical accuracy. If AI is being used to infer data about individuals, the draft guidance highlights that ensuring the statistical accuracy of an AI system is one of the key considerations in relation to compliance with the fairness principle. Whilst an AI system does not need to be 100% accurate to be compliant, the ICO states that the more statistically accurate the system is, the more likely it is that the processing will be in line with the fairness principle. Additionally, individuals’ reasonable expectations need to be taken into account; for example, output data should be clearly labelled as inferences and predictions and should not claim to be factual. The statistical accuracy of a model should also be assessed on an ongoing basis.
  • Bias and Discrimination. The draft guidance suggests specific methods to address bias and discrimination in models, for example, using balanced training data (e.g., by adding data on underrepresented subsets of the population). The draft guidance also sets out that a system’s performance should be monitored on an ongoing basis, and policies should set out variance limits for accuracy and bias above which the system should not be used. Further, if AI is replacing existing decision-making systems, the ICO recommends that both systems could initially be run concurrently to identify variances (a minimal sketch of such monitoring follows this list).
  • Transparency. The draft guidance recognises that the ability to explain AI is one of the key challenges in ensuring compliance, but does not go into further detail on how to address the transparency principle. Instead, it cross-refers to the explAIn guidance it has produced in collaboration with the Alan Turing Institute.
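To make the monitoring recommendations above concrete, here is a minimal sketch of an ongoing check against policy thresholds. The threshold values, subgroup names and data are illustrative assumptions, not figures from the draft guidance.

```python
# Illustrative sketch: ongoing monitoring of statistical accuracy, with a
# policy floor for overall accuracy and a variance limit for the accuracy
# gap between population subgroups. All numbers are invented examples.

ACCURACY_FLOOR = 0.90   # policy: minimum acceptable overall accuracy
MAX_GROUP_GAP  = 0.05   # policy: maximum accuracy gap between subgroups

def accuracy(predictions, labels):
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def check_against_policy(results_by_group):
    """results_by_group maps a subgroup name to (predictions, labels)."""
    per_group = {g: accuracy(p, y) for g, (p, y) in results_by_group.items()}
    overall = sum(per_group.values()) / len(per_group)   # macro average
    gap = max(per_group.values()) - min(per_group.values())
    if overall < ACCURACY_FLOOR or gap > MAX_GROUP_GAP:
        raise RuntimeError(
            f"outside policy limits: accuracy={overall:.2f}, group gap={gap:.2f}")
    return per_group

# An underrepresented subgroup performing worse trips the variance limit:
results = {
    "group_a": ([1, 0, 1, 1], [1, 0, 1, 1]),   # 100% accurate
    "group_b": ([1, 1, 0, 0], [1, 0, 0, 1]),   # 50% accurate
}
check_against_policy(results)   # raises: gap of 0.50 exceeds 0.05
```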

3. Data minimisation and security

  • Security. The draft guidance highlights that using AI to process personal data can increase known security risks. For instance, the ICO notes that the large amounts of personal data often needed to train AI systems increase the potential for loss or misuse of such data. In addition, the complexity of AI systems, which often rely heavily on third-party code and/or relationships with suppliers, introduces new potential for security breaches and software vulnerabilities. The draft guidance includes information on the types of attacks to which AI systems are likely to be particularly vulnerable and the types of security measures controllers should consider implementing to guard against such attacks. For example, the security measures recommended by the ICO to protect AI systems include: subscribing to security advisories to receive alerts of vulnerabilities; assessing AI systems against external security certifications or schemes; monitoring API requests to detect suspicious activity; and regularly testing, assessing and evaluating the security of both in-house and third-party code (e.g., through penetration testing). The draft guidance also suggests that applying de-identification techniques to training data could be appropriate, depending on the likelihood and severity of the potential risk to individuals.
  • Data Minimisation. Whilst the ICO recognises that large amounts of data are generally required for AI, it emphasises that the data minimisation principle will still apply, and AI systems should not process more personal data than is needed for their purpose. Further, whilst models may need to retain data for training purposes, any training data that is no longer required (e.g., because it is out of date or no longer predictively useful) should be erased.
  • The ICO highlights a number of techniques which could be used to ensure that AI models only process personal data that is adequate, relevant and limited to what is necessary – for example, removing features from a training data set that are not relevant to the purpose (a minimal sketch of this follows below). In this context, the ICO emphasises that the fact that some data may later be found to be useful for making predictions is not sufficient to justify its inclusion in a training data set. The ICO also suggests a number of additional risk mitigation techniques, such as converting personal data into less “human readable” formats and making inferences locally via a model installed on a user’s own device rather than hosted on a cloud server (for example, models for predicting what news content a user might be interested in could be run locally on their smartphone).
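As a concrete illustration of the feature-removal technique mentioned above, the sketch below drops training-set fields that are not relevant to the model’s purpose before the data is used. The field names, and the choice of which features count as relevant, are invented for the example.

```python
# Illustrative sketch: enforcing data minimisation by keeping only the
# features needed for the model's stated purpose. Field names are invented.

RELEVANT_FEATURES = {"age_band", "account_tenure", "payment_history"}

def minimise(record: dict) -> dict:
    """Drop any feature not needed for the processing purpose."""
    return {k: v for k, v in record.items() if k in RELEVANT_FEATURES}

raw_record = {
    "name": "Alice Tan",        # direct identifier: not needed to train
    "postcode": "43000",        # 'might be useful later' is not enough
    "age_band": "30-39",
    "account_tenure": 48,
    "payment_history": "good",
}

print(minimise(raw_record))
# {'age_band': '30-39', 'account_tenure': 48, 'payment_history': 'good'}
```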

4. The exercise of individual rights

  • The draft guidance also addresses the specific challenges that AI systems pose to ensuring individuals have effective mechanisms for exercising their personal data rights.
  • Training Data. The ICO states that converting personal data into a different format does not necessarily take the data out of scope of data protection legislation. For example, pre-processing of data (transforming the data into values between 0 and 1) may make training data much more difficult to link to a particular individual, but it will still be considered personal data if it can be used to “single out” the individual it relates to (even if it cannot be associated with an individual’s name). The ICO states that in these circumstances, there is still an obligation to respond to individual rights requests (a worked sketch of this kind of transformation follows this list).
  • Access, rectification and erasure. The draft guidance confirms that requests for access, rectification or erasure of training data should not be considered unfounded or excessive simply because they may be more difficult to fulfil (for example in the context of personal data contained in a large training data set). However, the ICO does clarify that there is no obligation to collect or maintain additional personal data just to enable the identification of individuals within a training data set for the sole purpose of complying with rights requests. Therefore, the draft guidance recognises that there could be times when it is not possible to identify an individual within a training data set and therefore it would not be possible to fulfil a request.
  • The draft guidance highlights that, in practice, the right to rectification is more likely to be exercised in the context of AI outputs, i.e., where an inaccurate output affects the individual. However, the ICO clarifies that predictions cannot be inaccurate where they are intended as prediction scores, not statements of fact. Therefore, in these cases, as personal data is not inaccurate, the right to rectification will not apply.
  • Portability. The draft guidance clarifies that whilst personal data used to train a model is likely to be considered to have been “provided” by the individuals and therefore subject to the right to data portability, pre-processing methods often significantly change the data from its original form. In cases where the transformation is significant, the ICO states that the resulting data may no longer count as data “provided” by the individual and would therefore not be subject to data portability (although it will still constitute personal data and be subject to other rights). Further, the draft guidance confirms that the outputs of AI models, such as predictions and classifications about individuals would also be out of scope of the right to data portability.
  • Right to be informed. Individuals should be informed if their personal data is going to be used to train an AI system. However, the ICO recognises that where a data set has been stripped of personal identifiers and contact addresses, it may be impossible or involve disproportionate effort to provide the information directly to individuals. In these cases the ICO states that other appropriate measures should be taken, for example, providing public information including an explanation of where the data was obtained.
  • Solely automated decisions with legal or similar effect. The draft guidance sets out specific steps that should be taken to fulfil rights related to automated decision making. For example, the system requirements needed to allow meaningful human review should be taken into account from the design phase onwards and appropriate training and support should be provided to human reviewers, with the authority to override an AI system’s decision if necessary. The draft guidance also emphasises that the process for individuals to exercise these rights must be simple and user-friendly. For example, if the result of a solely automated decision is communicated via a website, the page should contain a link or clear information allowing the individual to contact staff who can intervene. In addition, the draft guidance provides explanations on the difference between solely automated and partly automated decision-making and stresses the role of active human oversight; in particular, controllers should note that if human reviewers routinely agree with an AI system’s outputs and cannot demonstrate that they have genuinely assessed them, their decisions may effectively be classed as solely automated under the GDPR.
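To illustrate the pre-processing point made in the training-data item above, the sketch below applies the kind of 0-to-1 scaling the guidance describes. The numbers are invented; the point is that each scaled record can still be unique to one person.

```python
# Illustrative sketch: min-max scaling maps each feature into [0, 1].
# The result is harder for a human to read, but a unique combination of
# scaled values can still "single out" one individual, so it can remain
# personal data. All values below are invented.

def min_max_scale(column):
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

ages    = [23, 35, 47, 61]
incomes = [28000, 52000, 75000, 91000]

scaled_records = list(zip(min_max_scale(ages), min_max_scale(incomes)))
print(scaled_records)
# [(0.0, 0.0), (0.32, 0.38), (0.63, 0.75), (1.0, 1.0)] (rounded)
# Each pair is still unique to one person: transformed, not anonymised.
```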

What should organisations do now?

While the draft guidance is not yet in final form, it nevertheless provides an indication of the ICO’s current thinking and the steps it will expect organisations to take to mitigate the privacy risks AI presents.

It will therefore be important to follow the development of the draft guidance carefully. In addition, at this stage it would be prudent to review how you currently develop and deploy AI systems and how you process personal data in this context to help you prepare for when the draft guidance is finalised. Some practical steps to take at this stage include:

  • Reviewing existing accountability and governance frameworks around your use of AI models, including your current approach to DPIAs in this context. In particular, DPIAs for existing projects or services may need to be conducted or updated, and risk mitigation measures identified, documented and implemented;
  • Considering your current approach to developing, training and deploying AI models and how you will demonstrate compliance with the core data protection principles, particularly the requirements of fairness, lawfulness, transparency, and data minimisation;
  • Reviewing the security measures you currently employ to protect AI systems, and updating these if necessary, depending on the level of risk; and
  • Ensuring you have appropriate policies and processes for addressing data subjects’ rights in the AI context, including in relation to solely automated decision-making.

Next steps

  • The ICO is currently running a public consultation on the draft guidance and has specifically requested feedback from technology specialists such as data scientists and software developers, as well as DPOs, general counsel and risk managers. The consultation will be open until 1 April 2020.

Explaining AI Decisions

On 20 May 2020, the Information Commissioner’s Office (“ICO”) published new guidance, Explaining decisions made with AI. This follows the draft guidance published in December 2019 and the subsequent consultation. The guidance was created by the ICO in conjunction with The Alan Turing Institute, and the ICO says that its aim is to help organisations explain their processes, services and decisions delivered or assisted by AI to those who are affected by them. The explainability of AI systems has been the subject matter of Project ExplAIn, a collaboration between the ICO and The Alan Turing Institute. It should be noted that the guidance is not a statutory code of practice under the UK Data Protection Act 2018, and the ICO points out that it is not intended as comprehensive guidance on data protection compliance. Rather, the ICO views this as practical guidance, setting out what it considers good practice for explaining decisions that have been made using AI systems that process personal data.

In this article we summarise some of the key aspects of the new guidance.

Guidance in three parts

The guidance is not short (c.130 pages in total) and it is divided into three Parts:

1. The basics of explaining AI

2. Explaining AI in practice

3. What explaining AI means for your organisation

The basics of explaining AI

Part 1 (The basics of explaining AI) covers some of the basic concepts (e.g. What is AI? What is an output or an AI-assisted decision? How is an AI-assisted decision different to one made only by a human?) and provides an overview of the legal framework relevant to the concept of explainability. The overview focusses on data protection laws (e.g. the General Data Protection Regulation (“GDPR”) and the UK Data Protection Act 2018) but also explains the relevance of, for example, the Equality Act 2010 (in relation to decisions that may be discriminatory), judicial review (in relation to government decisions), and sector-specific laws that may also require some explainability of decisions made or assisted by AI (for example, financial services legislation, which may require customers to be provided with information about decisions concerning applications for products such as loans or credit).

Part 1 of the guidance sets out six ‘main’ types of explanation that the ICO/The Alan Turing Institute have identified for explaining AI decisions. These are: rationale explanation, responsibility explanation, data explanation, fairness explanation, safety and performance explanation, and impact explanation. The guidance sets out the types of information to be included in each type of explanation. It also draws a distinction between what it calls process-based and outcome-based explanations (which apply across all six explanation types identified in the guidance). Process-based explanations of AI systems explain the good governance processes and practices followed throughout the design and use of the AI system. Outcome-based explanations clarify the results of a decision, for example, the reason why a certain decision was reached by the AI system, using plain, easily understandable and everyday language.

The guidance also sets out five contextual factors that it says may apply when constructing an explanation for an individual. These contextual factors were the results of research carried out by the ICO/The Alan Turing Institute. The guidance says that these factors can be used to help decide what type of explanation someone may find most useful. The factors are: (1) domain factor (i.e. the domain or sector in which the AI system is deployed); (2) impact factor (i.e. the effect an AI decision has on an individual or society); (3) data factor (i.e. the type of data used by an AI model may impact an individual’s willingness to accept or contest a decision); (4) urgency factor (i.e. the importance of receiving an explanation quickly); and (5) audience factor (i.e. who or which groups of individuals are decisions made about, which may help to determine the type of explanation that is chosen).

Part 1 also sets out four key principles that organisations should think about when developing AI systems in order to ensure that AI decisions are explainable: (1) Be transparent; (2) Be accountable; (3) Consider the context in which the AI will operate; and (4) Reflect on impacts of the AI system on individuals and society.

Explaining AI in practice

Part 2 (Explaining AI in practice) is practical and more technical in nature. It sets out six ‘tasks’ that can be followed in order to assist with the design and deployment of appropriately explainable AI systems. The guidance provides an example of how these tasks could be applied in a particular case in the health sector. The tasks include: collecting and pre-processing data in an ‘explanation-aware’ manner, building your AI system in a way that relevant information can be extracted, and translating the logic of the AI system’s results into easy-to-understand reasons.
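As a loose illustration of the last of those tasks, the sketch below turns a set of hypothetical feature contributions from a model into an everyday-language reason. The feature names, weights and template wording are invented and are not taken from the guidance.

```python
# Illustrative sketch: translating a model's internal result into a plain-
# language reason for the affected individual. Feature names, weights and
# template wording are all invented for the example.

def explain_decision(contributions: dict, decision: str) -> str:
    # Pick the feature with the largest absolute contribution.
    top = max(contributions, key=lambda k: abs(contributions[k]))
    direction = "counted against" if contributions[top] < 0 else "supported"
    return (f"The application was {decision}. "
            f"The factor that most {direction} it was {top}.")

contributions = {"payment history": -0.42, "income": 0.18, "tenure": 0.05}
print(explain_decision(contributions, "declined"))
# The application was declined. The factor that most counted against it
# was payment history.
```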

What explaining AI means for your organisation

Part 3 (What explaining AI means for your organisation) focusses on the various roles, policies, procedures and documentation that organisations should consider implementing to ensure that they are in a position to provide meaningful explanations about their AI systems.

This part of the guidance covers the roles of the product manager (i.e. the person that defines the requirements of the AI system and determines how it should be managed, including the explanation requirements), the ‘AI development team’ (which includes the people involved with collecting and analysing data that will be inputted into the AI system, with building, training and optimising the models that will be deployed in the AI system, and with testing the AI system), the compliance team (which includes the Data Protection Officer, if one is designated), and senior management and other key decision makers within an organisation. The guidance suggests that senior management should obtain assurances from the product manager that an AI system being deployed by an organisation provides the appropriate level of explanation to individuals affected by AI-based decisions.

Regulators focus on explainability and transparency

As the use and development of AI continues to expand, the ICO has shown that it will be proactive in making sure that usage of the technology aligns to existing privacy legislation and other protections for individuals. In addition to this new guidance, the ICO recently consulted on new draft Guidance on the AI auditing framework. That guidance provides advice on how to understand data protection law in relation to AI and gives recommendations for technical and organisational measures that can be implemented to mitigate the risks that the use of AI may pose to individuals.

The ICO is not the only regulator that sees the importance of transparency and explainability to AI systems. In February 2020, the Financial Conduct Authority (“FCA”) announced a year-long collaboration with The Alan Turing Institute that will focus on AI transparency in the context of financial services. The FCA acknowledges that, along with all of the potential positives that come from the use of AI in financial services, the deployment of AI raises some important ethical and regulatory questions. It considers that transparency is a key tool for reflecting on those questions and thinking about strategies to address them.

Along with announcing its collaboration, the FCA also set out a high-level framework for thinking about AI transparency in financial markets which operates around four guiding questions: (1) Why is transparency important? (2) What types of information are relevant? (3) Who should have access to these types of information? (4) When does it matter?

More information about the ICO’s Guidance on the AI auditing framework and the FCA’s transparency initiatives is available on their respective websites.

The number of regulatory announcements and publications that have already taken place or are expected in 2020 shows the level of scrutiny that regulators and lawmakers are giving AI and the seriousness with which they regard its benefits and the issues that may arise from its use. It also indicates the speed at which this technology is being deployed and at which regulators are working to keep up with it.

Privacy Statement

Last Updated 5 Dec 2019

This privacy notice tells you about the information we collect from you when you submit an enquiry to us via our website. In collecting this information, we are acting as a data controller and, by law, we are required to provide you with information about us, about why and how we use your data, and about the rights you have over your data.

Who are we?

We are NT Business Consulting and Training Company. Our address is 9 Lorong Meringin 1/1, Bukit Meringin, 43000 Kajang, Selangor. You can contact us by post at the above address, by email at [email protected], or by telephone on +60196609402.

We are not required to have a data protection officer, so any enquiries about our use of your personal data should be addressed to the contact details above.

What personal data do we collect?

When you submit an enquiry to us, we ask you for your name, your email address and a brief description of your enquiry.

Why do we collect this information?

We will use your information to respond to your enquiry and hopefully to provide you with the information you need. We do this in order to take steps at your request prior to entering into a contract i.e., as part of pre-sales activity.

What do we do with your information?

Your information is stored in our Customer Relationship Management (CRM) system, which is hosted by Microsoft. We have a contractual agreement with Microsoft that commits them to providing an appropriate level of safeguards for your personal data. It is not sent outside of Malaysia.

We will read your message and normally respond to you either via telephone or via email. We may ask for your consent to retain your information in our CRM system in order to send you further information that we think may be of interest to you.

We will not use the information to make any automated decisions that might affect you.

How long do we keep your information for?

Your enquiry is kept in our CRM system for one month unless you give us your consent to send you marketing information on an ongoing basis, in which case we will keep it for as long as you continue to consent.

Your rights over your information

By law, you can ask us what information we hold about you, you can see it, and you can ask us to correct it if it is inaccurate.

You can also ask for it to be erased and you can ask for us to give you a copy of the information.

You can also ask us to stop using your information at any time, either by clicking the unsubscribe link at the end of any email communication, or by emailing, writing or telephoning us using the contact details above.

Your right to complain

If you have a complaint about our use of your information, you can contact the Department of Personal Data Protection via their website at www.pdp.gov.my

or write to them at:

Department Of Personal Data Protection

Level 6, Ministry of Communications and Multimedia Complex,
Lot 4G9, Persiaran Perdana, Precinct 4 Federal Government Administrative Center
62100 Putrajaya, Malaysia.

Tel:    03-8000 8000
Fax:   03-8911 7959
Email: aduan@pdp.gov.my

Cookies Statement

Last Updated 5 Dec 2019

What are cookies?

A cookie is a text file that is stored on your computer or mobile device by a website that you access. It will contain some anonymous information, such as a unique identifier, the site name, and some digits and numbers. It allows a website to remember things like your preferences or what’s in your shopping basket.

Cookies do lots of different things, like letting you navigate between pages efficiently, storing your preferences, and generally improving your experience of a website. Cookies make the interaction between you and the website faster and easier. Without them, a website would treat you as a new visitor every time you moved to a new page on the site – for example, when you enter your login details and move to another page, it wouldn’t recognise you and wouldn’t be able to keep you logged in.

By using and browsing the NT Business Consulting and Training website, you consent to cookies being used in accordance with our policy. If you do not consent, you must turn off cookies or refrain from using the site.

Most browsers allow you to turn off cookies. To do this, look at the ‘help’ menu on your browser. Switching off cookies may restrict your use of the website and/or delay or affect the way in which it operates.

What types of cookies are there?

Broadly, there are three types of cookies:

Third party cookies

Third party cookies are set by another website. For example, we use a third-party analytics company that sets its own cookie to perform this service.

Session cookies

Session cookies are stored only temporarily during a browsing session and are deleted from the user’s device when the browser is closed.

Persistent cookies

Persistent cookies are saved on your computer for a fixed period (usually a year or longer) and are not deleted when the browser is closed. They are used where we need to know who you are for more than one browsing session. For example, we use this type of cookie to store your preferences, so that they are remembered for the next visit.

They are used on the main NT website, Trainee ePortfolio, Revalidation ePortfolio, Research Ready, Self Service Area and Online learning sites.
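For readers who want to see the difference in practice, here is a minimal sketch using Python’s standard library: a session cookie simply omits any lifetime, while a persistent cookie sets one (here roughly a year). The cookie names and values are invented examples.

```python
# Illustrative sketch: the only structural difference between a session
# cookie and a persistent cookie is the lifetime attribute. Names and
# values below are invented examples.
from http.cookies import SimpleCookie

cookies = SimpleCookie()

cookies["session_id"] = "abc123"      # session cookie: no expiry set, so
                                      # it dies when the browser is closed

cookies["site_prefs"] = "dark"        # persistent cookie: Max-Age keeps
cookies["site_prefs"]["max-age"] = 60 * 60 * 24 * 365   # it for ~1 year

print(cookies.output())
# Set-Cookie: session_id=abc123
# Set-Cookie: site_prefs=dark; Max-Age=31536000
```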

Why does NT use cookies?

To provide personalised services to individual users and to improve your user experience by enabling the website to ‘remember’ you, either for the duration of your visit (using a ‘session cookie’) or for repeat visits (using a ‘persistent cookie’).

To help us to monitor and improve the services we offer.

Managing Cookies

For more information about cookies and managing them, please visit aboutcookies.org.uk, which includes information about how they can be disabled in most commonly used browsers. However, you should be aware that disabling certain cookies may cause the website not to function properly.

How to control cookies

The NT shop pages provide further information about processing of data collected via online ordering.

Processing and retention of personal data

All personal data is processed in accordance with Malaysian data protection laws and regulations. We will not pass any of your personal data to outside organisations or individuals without your express consent.

You have a right to know about the personal information NT holds about you, and to have your data corrected or deleted.

What cookies does NT Business Consulting and Training use?

Strictly necessary cookies

  • ASP.NET_SessionId / ASPXAUTH – Used across the whole website if a user is logged in. These cookies ensure that a user stays logged in when moving between pages on the website within one session.
  • ARRAffinity – Set by websites run on the Windows Azure cloud platform. It is used for load balancing, to make sure visitor page requests are routed to the same server in any browsing session.
  • civicAllowCookies – Used by the Cookie Control user interface to remember your cookie preference.

Performance cookies

  • __utma / __utmb / __utmc / __utmz / __gads / _ga / _gid / _gat – NT Business Consulting and Training uses a tool called Google Analytics to give us statistical data on the performance of our website. These cookies are placed by Google Analytics.
  • SC_ANALYTICS_GLOBAL_COOKIE – Set by the Sitecore Content Management System and used for web analytics to identify repeat visits by unique users.
  • _hjIncludedInSample – We run Hotjar to gather data on where users are clicking on pages, how they are viewing single and multiple pages, and to ask simple polls. This session cookie lets Hotjar know whether the visitor is included in the sample used to generate funnels.

Functional cookies

  • __atuvc – Created and read by AddThis’s JavaScript on the client side to make sure the user sees the updated count if they share a page and return to it before our share count cache is updated. No data from this cookie is sent back to AddThis, and removing it when disabling cookies would cause unexpected behaviour for users. AddThis is a content sharing and social insights platform.
  • NT-cookiewarning – Records whether a user has accepted the use of cookies on this website.

Targeting and advertising cookies

NT Business Consulting and Training sometimes uses third-party agencies to advertise and bring traffic to certain parts of the site, for example where we have a specific campaign. These cookies are placed by one of our third-party agencies to allow them to track the activities of visitors that have come to the site from their adverts.

Cookies set by other websites through this site

We want to provide interesting and engaging content on our website. On a number of pages, we embed media such as YouTube videos. The suppliers of these services may also set cookies on your device when you visit the pages where we have used this type of content. These are known as ‘third-party’ cookies. NT does not control how a third party uses their cookies. You should check these third-party websites’ privacy policies for more information about their cookies if you are concerned about this.

Contact

We can be contacted by email and post at the following address:

NT Business Consulting and Training

9 Lorong Meringin

1/1 Bukit Meringin

43000, Kajang

Selangor.

Email: [email protected]

Tel:    +60196609402