My name is Rashe Mishra, and I am a senior undergraduate studying Digital Media, Arts, and Technology at Penn State Behrend. For my research project in my Digital Project Production class, I am researching data privacy and how it affects the careers of designers, such as myself!
DIGIT 400: Tech Issue Research Guide
The significance of data privacy is slowly dwindling in the UI/UX design sphere, traded away in the effort to make design more efficient and frictionless for users. By analyzing data privacy breach cases, building a guide to ethical design practices, and examining the crucial role UI/UX designers play in maintaining data privacy, I would like to lay out exactly how designers can uphold the importance of data privacy.
Before we get to why we need data privacy, let’s establish what data privacy is.
As the guide from Usercentrics GmbH outlines, data privacy is the right an individual has to regulate how their information (data) is collected, used, and shared. When this control is neglected, the outcome can be severe: identity theft or fraud for users, and regulatory, reputational, or financial consequences for businesses.
As a designer, this means privacy must be integrated into the product and experience from the start, not treated as an afterthought. Designers shape how consent is requested and how data flows are represented, and they play a critical role in aligning user experience with regulatory demands and ethical expectations.
By embedding clarity, transparency, and user control into design, you can reduce friction and stay on the right side of the fine line where privacy failures become product failures.
At its core, good design is flawless and undetectable; it hardly gets mentioned because it pairs so perfectly with one’s necessities. This raises the question: is human-centered design ethical design? For design to be “good,” a certain amount of data must be obtained to get a product there. Unfortunately, this data can be breached without the user’s knowledge, all in the name of offering convenience. At the end of the day, we are USER experience designers…
We are the safeguard for data privacy and users.
But there aren’t laws, policies, or even frameworks that teach us HOW to become this safeguard. From the research I have gathered from articles on ethical design and data privacy, I have identified four key themes that designers can adopt to make their design ethical:
Back in 2006, Clive Humby, a British mathematician and entrepreneur in the field of data science and customer-centric business strategies, spoke the words “Data is the new oil.” This quote has many meanings and interpretations, but we do know the analogy implies that data is an essential element when building a user-based product. With a majority of our lives lived in the digital world, our privacy relies on the security of our data. Simultaneously, companies are fighting to tailor their products to consumers through the use of that same data. Now more than ever, data is a commodity.
So as designers, how do we cultivate both the perfect product while maintaining user privacy?
Let’s set an objective:
Implement privacy considerations during design thinking for optimal product development!
That sounds like a mouthful, but worry not: we are following a well-oiled product development process practiced by industry professionals called Privacy By Design.
You can read more about it here :)
To start, we must understand that there is no single, simple step that takes us straight to the objective. So, like true designers, we will visualize our journey to the objective!
Instructions: Click on each step to toggle a pop-up box explaining the steps in the "Privacy By Design" process map. (A code sketch after this list shows what steps 3 and 4 could look like in practice.)
1. Identify essential data early and embed privacy-first thinking.
2. Evaluate risks and collaborate with stakeholders.
3. Collect only necessary data and prioritize transparency.
4. Integrate consent and explanatory features early.
5. Ensure users understand how their data is handled.
6. Verify your product meets privacy requirements.
7. Refine privacy features based on real-world feedback.
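To make steps 3 and 4 concrete, here is a minimal sketch of what consent-gated, minimized data collection could look like in code. Everything in it (the DataCategory enum, the ConsentLedger, the collect function) is my own hypothetical illustration, not a real framework:

```swift
import Foundation

// Hypothetical categories of data a product might want to collect.
enum DataCategory: String {
    case email, location, contacts, biometrics
}

// A consent ledger: nothing is collected unless the user has explicitly
// opted in, and each grant is timestamped so it can be audited or revoked.
struct ConsentLedger {
    private var grants: [DataCategory: Date] = [:]

    mutating func grant(_ category: DataCategory) { grants[category] = Date() }
    mutating func revoke(_ category: DataCategory) { grants[category] = nil }
    func hasConsent(for category: DataCategory) -> Bool { grants[category] != nil }
}

// Data minimization in practice: collection refuses to run unless
// consent exists for that specific category.
func collect(_ category: DataCategory, using ledger: ConsentLedger) {
    guard ledger.hasConsent(for: category) else {
        print("Skipping \(category.rawValue): no consent recorded.")
        return
    }
    print("Collecting \(category.rawValue) with the user's consent.")
}

var ledger = ConsentLedger()
ledger.grant(.email)
collect(.email, using: ledger)    // runs: consent was granted
collect(.location, using: ledger) // skipped: no consent on record
```

The key design choice is that collection refuses to run by default: privacy is the starting state, and gathering data is the exception that consent unlocks, which is exactly the mindset Privacy By Design asks of us.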
Windows Azure became generally available on February 1st, 2010; in 2014, Microsoft renamed the service “Microsoft Azure” to indicate its capability to work as a general-purpose public cloud platform.
Azure is a cloud computing platform from Microsoft that provides a wide range of cloud services, such as computing, analytics, storage, and networking, to help businesses build, deploy, and manage applications. To understand how this platform works, you can read this article from TechTarget.
Around three years ago, in 2022, Microsoft decided to tighten the policies governing the artificial intelligence services that integrate with and power Microsoft Azure. Under these new policies, any and all companies that want to use these tools, or are actively using them, must apply to Microsoft for access. This is to ensure Microsoft’s AI ethics standards are being followed.
The decision to rein in Azure’s AI services was due to controversial features that enabled companies to infer gender, age, and emotion through facial recognition technology. Another example is its custom neural voice technology, which Microsoft restricted due to its ability to create synthetic voices nearly identical to the original source. Read up on this technology here! The data that powers tools such as these can be violated, as it involves personal biometric data. Finding the boundary where collecting data stops being helpful and starts violating privacy is crucial, a boundary Microsoft seems to have come to understand. Or have they?
Towards the end of September 2025, Microsoft received a joint letter from six civil society groups about its involvement with Israeli authorities. The letter points to several media reports of Microsoft’s AI and cloud capabilities being used to conduct mass surveillance of Palestinians and facilitate lethal airstrikes on civilians by the Israel Defense Forces (IDF).
According to this Computer Weekly article, 11,500 terabytes of Palestinian phone calls had been compiled through Azure-based surveillance systems by July 2025, and the system allows Unit 8200 intelligence officers to listen back to an enormous pool of phone conversations of Palestinian civilians.
Although the firm has since “ceased and disabled a set of services” to Unit 8200, one must question how Microsoft could sign off on an operation like this in the first place, knowing full well that an entity must apply before using such tools.
We can look to major tech giants to know what not to do, and what to do.
For example, Apple made a company-wide decision that put the user first. On April 26th, 2021, Apple shipped a feature in its iOS 14.5 update called “App Tracking Transparency.” This feature allows users to choose whether an app can track their activity across other apps and websites.
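For developers, that choice surfaces through Apple’s AppTrackingTransparency framework: an app cannot read the device’s advertising identifier until the user answers a system prompt. Here is a minimal Swift sketch of that flow (the helper function name is my own):

```swift
import AppTrackingTransparency
import AdSupport

// Ask iOS to show the system tracking prompt. The app must also declare
// an NSUserTrackingUsageDescription string in its Info.plist, which the
// prompt displays to the user as the reason for tracking.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user opted in: the advertising identifier (IDFA)
            // is available for cross-app tracking.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // The user opted out (or hasn't answered yet): the IDFA
            // reads as all zeros, and the app must not track them
            // across other companies' apps and websites.
            print("Tracking not allowed")
        @unknown default:
            print("Tracking not allowed")
        }
    }
}
```

Notice the default: until the user actively says yes, the identifier is useless to the app. The burden is on the product to earn consent, not on the user to find a buried opt-out.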
What makes this release so significant is that one of the largest tech companies in the world considered consumer privacy (the bare minimum, am I right…). Rather than being framed as a “compliance requirement” (rules and regulations a company must follow to operate legally), the feature was posed as a user right. It was welcomed by many users, but also met with pushback from major platforms.
So what can we learn from a simple iOS release?
People care about their privacy. When given the choice to protect it, people will choose not to be exploited. Big platforms can misuse data, collecting it to improve algorithms just to push targeted advertisements at users (Facebook, for instance, pushed back on Apple’s release of App Tracking Transparency because it would decrease their ad revenue). This change has set a new standard in a digital age where data is a commodity that can be mishandled like any other.