Link

A Brief Unpacking of a Declaration on the Future of the Internet

Cameron F. Kerry has a helpful piece in Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a China-Russia joint statement. That joint statement advances those countries’ positions on ‘securing’ domestic Internet spaces and on shifting Internet governance from multi-stakeholder forums to State-centric ones.

So far, so good. However, baked into Kerry’s article is language suggesting that he either misunderstands, or understates, some of the security-related elements of the Declaration. He writes:

There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology— is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.

Reading this against the 2019 Ministerial and the recent updates to the Council of Europe’s Cybercrime Convention, I see that a vast swathe of new law enforcement and security agency powers would be entirely permissible under Kerry’s assessment of the Declaration and the States that signed it. While these new powers have been agreed to, or advanced by, signatory States, they have simultaneously been directly opposed by civil and human rights campaigners, as well as by some national courts. Specifically, there are live discussions around the following powers:

  • the availability of strong encryption;
  • the guarantee that the content of communications sent using end-to-end encrypted devices cannot be accessed or analyzed by third-parties (including by on-device surveillance);
  • the requirement of prior judicial authorization to obtain subscriber information; and
  • the oversight of preservation and production powers by relevant national judicial bodies.

Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights in safeguarding their devices, data, and communications from the State. When or if such a situation occurs, the signatories of the Declaration can hold fast in their flowery language around protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.

In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure, and as we leave the current stage of the global pandemic it is apparent that governments have continued to press ahead with their efforts to expand the powers that could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.

The next stage will be to assess how, and in what ways, international agreements and legal infrastructure will be brought into national legal systems, and to determine where to strategically oppose the worst of the overreaches. While it’s possible that some successes will be achieved in resisting the expansion of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives and to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.

The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.

Russia, Nokia, and SORM

Photo by Mati Mango on Pexels.com

The New York Times recently wrote about Nokia providing telecommunications equipment to Russian ISPs, all while Nokia was intimately aware of how its equipment would be interconnected with System for Operative Investigative Activities (SORM) lawful interception equipment. SORM equipment has existed in numerous versions since the 1990s. Per James Lewis:

SORM-1 collects mobile and landline telephone calls. SORM-2 collects internet traffic. SORM-3 collects from all media (including Wi-Fi and social networks) and stores data for three years. Russian law requires all internet service providers to install an FSB monitoring device (called “Punkt Upravlenia”) on their networks that allows the direct collection of traffic without the knowledge or cooperation of the service provider. The providers must pay for the device and the cost of installation.

SORM is part of a broader Internet and telecommunications surveillance and censorship regime that has been established by the Russian government. Moreover, other countries in the region use iterations or variations of the SORM system (e.g., Kazakhstan) as well as countries which were previously invaded by the Soviet Union (e.g., Afghanistan).

The Times’ article somewhat breathlessly states that the documents they obtained, which span 2008-2017,

show in previously unreported detail that Nokia knew it was enabling a Russian surveillance system. The work was essential for Nokia to do business in Russia, where it had become a top supplier of equipment and services to various telecommunications customers to help their networks function. The business yielded hundreds of millions of dollars in annual revenue, even as Mr. Putin became more belligerent abroad and more controlling at home.

It is not surprising that Nokia, as part of doing business in Russia, was complying with lawful interception laws insofar as its products were compatible with SORM equipment. Frankly, it would have been surprising if Nokia had flouted the law, given that Nokia’s own policy concerning human rights asserts that (.pdf):

Nokia will provide passive lawful interception capabilities to customers who have a legal obligation to provide such capabilities. This means we will provide products that meet agreed standards for lawful intercept capabilities as defined by recognized standards bodies such as the 3rd Generation Partner Project (3GPP) and the European Telecoms Standards Institute (ETSI). We will not, however, engage in any activity relating to active lawful interception technologies, such as storing, post-processing or analyzing of intercepted data gathered by the network operator.

It was somewhat curious that the Times’ article declined to recognize that Nokia Siemens Networks has a long history of doing business in repressive countries: it allegedly sold mobile lawful interception equipment to Iran circa 2009, and in 2010-11 its lawful interception equipment was implicated in political repression and torture in Bahrain. Put differently, Nokia’s involvement in low rule-of-law countries is not new and, if anything, its actions in Russia appear to be a mild improvement on its historical approach to enabling repressive governments to exercise lawful interception functionalities.

The broad question is whether Western companies should be authorized or permitted to do business in repressive countries. To some extent, we might hope that businesses themselves would exercise restraint. Beyond this, however, companies such as Nokia often require some kind of export license or approval before they can sell certain telecommunications equipment to repressive governments. This is particularly true when it comes to supplying lawful interception functionality (which was not the case when Nokia sold equipment to Russia).

While the New York Times casts a light on Nokia the article does not:

  1. Assess the robustness of Nokia’s alleged human rights commitments–have they changed since 2013 when they were first examined by civil society? How do Nokia’s sales comport with their 2019 human rights policy? Just how flimsy is the human rights policy in its own right?
  2. Assess the export controls that Nokia was(n’t) under–is it the case that the Finnish government has some liability or responsibility for the sales of Nokia’s telecommunications equipment? Should there be?
  3. Assess the activities of the telecommunications provider Nokia was supplying in Russia, MTS, and whether there is a broader issue of Nokia supplying equipment to MTS since it operates in various repressive countries.

None of this is meant to set aside the fact that Western companies ought to behave better on the international stage. But…this has not been a priority with regard to Russia, at least until the country’s recent war of aggression. Warning signs were prominently on display before this war and did not result in prominent, public recriminations towards Nokia or other Western companies doing business in Russia.

All lawful interception systems, regardless of whether they conform with North American, European, or Russian standards, are surveillance systems. Put another way, they are all about empowering one group to exercise influence or power over others who are unaware they are being watched. In low rule-of-law countries, such as Russia, there is a real question as to whether they should even be called ‘lawful interception systems’ as opposed to simply ‘interception systems’.

There was a real opportunity for the New York Times both to better contextualize Nokia’s involvement in Russia and, then, to explain and problematize the nature of lawful interception capabilities and standards. The authors could also have spent time discussing the nature of export controls on telecommunications equipment sold into repressive states. Sadly, this did not occur, with the result that the authors and the paper declined to consider more broadly the workings, ethics, and politics of enabling telecommunications and lawful interception systems in repressive and non-repressive states alike. While other kicks at this can will arise, it’s evident that there wasn’t even an attempt in this report on Nokia.

Policing the Location Industry

Photo by Ingo Joseph on Pexels.com

The Markup has a comprehensive and disturbing article on how location information is acquired by third-parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third-parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third-parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.
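The first-party collection pattern described above can be sketched minimally as follows. Everything here is hypothetical (the broker URL, the payload shape), but it illustrates why platforms struggle to catch the practice: the app itself calls the platform’s location API and forwards the result, so the transfer looks like any other piece of ordinary app traffic.

```python
import json
import time

# Hypothetical data-broker endpoint; in practice the app may relay
# through a first-party server, further masking the data flow.
BROKER_URL = "https://collector.example-broker.com/v1/locations"

def build_location_payload(device_id: str, lat: float, lon: float) -> str:
    """Package a location fix the way a first-party app might before
    forwarding it to a third-party collector. Nothing in the payload
    distinguishes it from legitimate app telemetry."""
    record = {
        "device": device_id,   # often an advertising identifier
        "lat": round(lat, 5),  # ~1 m precision
        "lon": round(lon, 5),
        "ts": int(time.time()),
    }
    return json.dumps(record)

payload = build_location_payload("ad-id-1234", 43.65348, -79.38393)
print(payload)
```

Because the request originates from the developer’s own code rather than a known SDK, a platform reviewer cannot simply fingerprint a library to detect the violation; they would need to inspect where each network request actually sends data.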

Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline the secretive collection of personal data by third-parties: if the government cuts off the flow of location information, it will impede the ability of governments themselves to obtain this information.

In some cases intelligence and security services obtain location information from third-parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often sees parts of government push for carve outs to collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think that it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location information data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by either private or public organizations that anonymization or de-identification of location information makes it acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it can subsequently be re-identified. And even in situations where that reversal doesn’t occur, policy decisions can still be made based on the aggregated information. The process of deriving these insights and applying them shows that while privacy is an important right to protect, it is not the only right implicated in the collection and use of location information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
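Research on location traces has repeatedly shown that even coarse location pairs, such as home and work areas, are unique to most individuals. A toy sketch of the point, using invented postal-code-level data:

```python
from collections import Counter

# Toy 'anonymized' dataset: device identifiers stripped, but each trace
# still contains coarse home and work cells (postal-code prefixes).
traces = [
    {"home": "M5V", "work": "M5H"},
    {"home": "M4C", "work": "M5H"},
    {"home": "M5V", "work": "M4Y"},
    {"home": "M6G", "work": "M5H"},
]

def uniqueness(traces):
    """Fraction of traces whose (home, work) pair is unique in the set.
    A unique pair can be linked back to a person with outside knowledge
    (a phone book, an employer roster), defeating the 'anonymization'."""
    counts = Counter((t["home"], t["work"]) for t in traces)
    unique = sum(1 for t in traces if counts[(t["home"], t["work"])] == 1)
    return unique / len(traces)

print(uniqueness(traces))  # every pair here is unique -> 1.0
```

In this toy set every device is trivially re-identifiable from just two coarse locations; real mobility datasets, with far more points per device, are correspondingly easier to reverse.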

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to offer a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volumes of data collected daily from individuals: individuals would be overwhelmed with consent requests, so the requests can’t be made in the first place! Second, they argue that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new, stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, the dirty way of creating power, services, and goods is just how things are and thus should remain the same.

In both the privacy and environmental worlds, corporate actors (and those whom they sell data/goods to) have benefitted from not having to pay the full cost of acquiring data without meaningful consent or accounting for the environmental cost of their activities. But, just as we demand enhanced environmental regulations to regulate and address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking away with individuals’ personal information, then it is clearly not particularly interested or invested in being ethical towards consumers. It’s imperative to continue pushing legislators not just to recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limited ability of platforms to police the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using “personal information”.” ↩︎
  2. See, as example, Section 23 of the CSE Act ↩︎
Link

Ontario’s Path Towards Legitimizing Employee Surveillance

Earlier this week, the Ontario government declared that it would be introducing a series of labour reforms. As part of these reforms, employers will be required to inform their employees of how they are being electronically monitored. These requirements will be applied to all employers with 25 or more employees.

Employers already undertake workplace surveillance, though it has become more common and extensive as a result of the pandemic. Where surveillance is undertaken, however, businesses must seek out specialized counsel or services to craft appropriate labour policies or contracting language. This imposes costs and also means that different firms may provide slightly different advice. The effect is that employers may be more cautious about what surveillance they adopt and must expend funds to obtain semi-boutique legal opinions.

While introducing legislation would seem to extend privacy protections to employees, as understood at the moment the reforms will only require notifying employees of the relevant surveillance; they will not bar the surveillance itself. Further, with a law on the books it will likely be easier for Ontario consulting firms to provide fairly rote advice based on the legislative language. The result, I expect, will be to drive down the transaction costs of developing workplace surveillance policies at the same time as workplace surveillance technologies become more affordable and more extensively deployed.

While I suspect that many will herald this law reform as positive for employees, on the basis that at least now they will know how they are being monitored, I am far less optimistic. The specificity of notice will matter, a lot, and unless great care is taken in drafting the legislation employers will obtain a significant degree of latitude in the actual kinds of intrusive surveillance that can be used. Moreover, unless required in legislative language, we can expect employers to conceal the specific modes of surveillance on grounds of needing to protect the methods for operational business reasons. This latter element is of particular concern given that major companies, including office productivity companies like Microsoft, are baking extensive workplace surveillance functionality into their core offerings. Ontario’s reforms are not, in fact, good for employees but are almost certain to be a major boon for their employers.

Link

‘Efficiency’ and Basic Rights

Rest of the World has published a terrific piece on the state of surveillance in Singapore, where governmental efficiency drives technologies that are increasingly placing citizens and residents under excessive and untoward kinds of surveillance. The whole piece is worth reading, but I was particularly caught by a comment made by the deputy chief executive of the Cyber Security Agency of Singapore:

“In the U.S., there’s a very strong sense of building technology to hold the government accountable,” he said. “Maybe I’m naive … but I just didn’t think that was necessary in Singapore.”

Better.sg, which has around 1,000 members, works in areas where the government can’t or won’t, Keerthi said. “We don’t talk about who’s responsible for the problem. We don’t talk about who is responsible for solving the problem. We just talk about: Can we pivot this whole situation? Can we flip it around? Can we fundamentally shift human behaviour to be better?” he said. 

… one app that had been under development was a ‘catch-a-predator’ chatbot, which parents would install on their childrens’ [sic] phones to monitor conversations. The concept of the software was to goad potential groomers into incriminating themselves, and report their activity to the police. 

“The government’s not going to build this. … It is hostile, it is almost borderline entrapment,” Keerthi said, matter-of-factly. “Are we solving a real social problem? Yeah. Are parents really thrilled about it? Yeah.”

It’s almost breathtaking to see a government official admit that he wants to develop tools the government itself couldn’t create for legal reasons, but that he hopes will be attractive to citizens and residents. While I’m clearly not condoning the social problem he is seeking to solve, the solution to such problems should sit within the four corners of the law rather than outside of them. When government officials deliberately move outside the legal strictures binding them, they demonstrate a dismissal of basic rights and due process with regard to criminal matters.

While such efforts might be ‘efficient’ and normal within Singapore they cannot be said to conform with basic rights nor, ultimately, with a political structure that is inclusive and responsive to the needs of its population. Western politicians and policy wonks routinely, and wistfully, talk about how they wish they were as free to undertake policy experiments and deployments as their colleagues in Asia. Hopefully more of them will read pieces like this one to understand that the efficiencies they are so fond of would almost certainly herald the end of the very democratic systems they operate within and are meant to protect.

Link

Operation Fox Hunt

(Photo by Erik Mclean on Pexels.com)

ProPublica’s Sebastian Rotella and Kirsten Berg have an outstanding piece on the Chinese government’s efforts to compel individuals to return to China to face often trumped-up charges. Efforts include secretly sending Chinese officials into the United States to surveil, harass, intimidate, and stalk residents of the United States, as well as imprisoning or otherwise threatening residents’ family members who have remained in China.

Many of the details in the article are drawn from court records, interviews, and assessments of Chinese media. It remains to be seen whether Chinese agents’ ability to conduct ‘fox hunts’ will be impeded now that the US government is more aware of these operations. Given the attention and suspicion now cast towards citizens of China, however, there is also a risk that FBI agents may become overzealous in their investigations, to the detriment of law-abiding Chinese-Americans or visitors from China.

In an ideal world there would be equivalent analyses or publications on the extent to which these operations are also undertaken in Canada. To date, however, there is no equivalent to ProPublica’s piece in the Canadian media landscape and, given the Canadian media’s contraction, we can’t realistically expect one anytime soon. Even a short piece assessing whether individuals from China who’ve run operations in the United States (and who are now barred from entering the US, or would face charges upon crossing the US border) are similarly barred or subject to an extradition order in Canada would be a positive addition to what we know about how the Canadian government is responding to these kinds of Chinese operations.

Quote

How we measure changes not only what is being measured but also the moral scaffolding that compels us to live toward those standards. Innovations like assembly-line factories would further extend this demand that human beings work at the same relentlessly monotonous rate of a machine, as immortalized in Charlie Chaplin’s film Modern Times. Today, the control creep of self-tracking technologies into workplaces and institutions follows a similar path. In a “smart” or “AI-driven” workplace, the productive worker is someone who emits the desired kind of data — and does so in an inhumanly consistent way.


Sun-ha Hong, “Control Creep: When the Data Always Travels, So Do the Harms”

The Failure to Frame Covid-19 Mobility Data

(Photo by Gabriel Meinert on Unsplash)

For the past year, the Toronto Star has repeatedly run articles that take mobility data from mobile device advertisers to assess the extent to which Torontonians are moving about. Reporting has routinely shown how people are moving more or less frequently, with articles often suggesting that people are moving too much when they’re supposed to be staying put.

The problem? The way ‘too much’ is assessed runs contrary to public health advice and lacks sufficient nuance to inform the public. In the most recent reporting, we find that:

Between Jan. 18 and Feb. 28, average mobility across Ontario increased from 58 per cent to 65 per cent, according to the marketing firm Environics Analytics. Environics defines mobility as a percentage of residents 15 or older who travelled 500 metres or more beyond their home postal code.
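As a rough illustration of how blunt this metric is, here is a sketch of the calculation implied by Environics’ definition. The data structure is invented, but note that the computation is binary: a walk around the block and a cross-city commute are weighted identically.

```python
# Hypothetical per-device records: maximum distance (metres) travelled
# beyond the home postal code on a given day, for residents 15+.
daily_max_distance = {
    "device-a": 120,     # stayed near home
    "device-b": 650,     # a walk around the neighbourhood
    "device-c": 14_000,  # a commute across the city
    "device-d": 0,       # did not leave home
}

def mobility_percentage(distances, threshold_m=500):
    """Share of devices that moved beyond `threshold_m` of home.
    A 650 m stroll and a 14 km commute each count exactly once,
    which is why the figure says little about transmission risk."""
    moved = sum(1 for d in distances.values() if d > threshold_m)
    return 100 * moved / len(distances)

print(mobility_percentage(daily_max_distance))  # -> 50.0
```

Two of the four hypothetical devices cross the threshold, so ‘mobility’ is 50 per cent, regardless of whether those trips were sanctioned outdoor exercise or risky indoor gatherings.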

To be clear: in Ontario, provincial and local public health leaders have strongly stated that people should get outside and exercise. That can involve walking or other outdoor activities, and those activities are not supposed to be restricted to within 500 metres of one’s home; that advice was largely a feature of the more restrictive lockdowns in European countries. We also know that mobility data is often higher in areas with higher percentages of BIPOC residents because they tend to have lower-paying jobs and must travel further to reach their places of employment.

As has become the norm, the fact that people have moved around more frequently as (admittedly ineffective) restrictions have been lifted, and that people are ‘region hopping’ by going from more restricted zones to less restricted ones, is being tightly associated with personal or individual failures. From a quoted expert, we find that:

“It shows that once things start to open, people just seem to do whatever, and that’s a recipe for disaster.”

I would suggest that what we are seeing is a pent-up, pretty normal, human response: the provincial government has behaved erratically, and you have some people racing around to get things done before returning to another (ineffective) set of restrictions, and a related set of people who believe that if the government is letting them move around then things must be comparatively safer. To put it another way, in the former case you have people behaving rationally (if, in some eyes, selfishly), whereas in the latter you have a failure by government to solve a collective action problem by downloading responsibility onto individuals. In both cases you are seeing an uptick in behaviour which suggests that people believe it’s safer to do things now than it was before, when the government assumed some responsibility, signalled that moving about was less safe, and actively discouraged it by keeping businesses and other ‘fun’ things shut down.

Throughout the pandemic response in Ontario, what has been evident is that the provincial government simply cannot develop and implement effective policies to mitigate the spread of the pandemic. The result of muddling through things has been that the public, and especially small business, has suffered extraordinarily whilst the gains have been meagre. The lack of paid sick leave, as an example, has seriously stymied the ability of lower-income workers to actually keep themselves apart from others while they wait for diagnoses and, if positive, recover from their infections.

To be fair, the Toronto Star and other outlets have covered paid sick leave issues, along with many other failures by the provincial government in its handling of the pandemic. And there is certainly some obligation on individuals to adhere to public health advice as best they can. But we’ve long known these are collective action problems: there is a need to move beyond downloading responsibility onto individuals, and for governments to behave effectively, coherently, and accountably throughout major crises. The provincial government has failed, and continues to fail, on every one of these measures, with the effect that individuals are responding to the past, present, and expected future actions of the government: more unpredictability and more restrictions on their daily lives as a result of government ineptitude.

Whereas the journalists could have cast what Ontarians are doing as a semi-natural response to the aforementioned government failings, those individuals are instead being castigated. We shouldn’t be blaming the victims of the pandemic, but I guess that’s what happens when assessing mobility data.

Link

Privacy and Contemporary Motor Vehicles

Writing for NBC News, Olivia Solon provides a useful overview of just how much data is collected by motor vehicles—using sensors embedded in the vehicles as well as collected by infotainment systems when linked with a smartphone—and how law enforcement agencies are using that information.

Law enforcement agencies have been focusing their investigative efforts on two main information sources: the telematics system — which is like the “black box” — and the infotainment system. The telematics system stores a vehicle’s turn-by-turn navigation, speed, acceleration and deceleration information, as well as more granular clues, such as when and where the lights were switched on, the doors were opened, seat belts were put on and airbags were deployed.

The infotainment system records recent destinations, call logs, contact lists, text messages, emails, pictures, videos, web histories, voice commands and social media feeds. It can also keep track of the phones that have been connected to the vehicle via USB cable or Bluetooth, as well as all the apps installed on the device.

Together, the data allows investigators to reconstruct a vehicle’s journey and paint a picture of driver and passenger behavior. In a criminal case, the sequence of doors opening and seat belts being inserted could help show that a suspect had an accomplice.

Of note, rental cars as well as second-hand vehicles retain all of this information, which can then be accessed by third-parties. It’s pretty easy to envision a situation where rental companies are obligated to assess retained data to determine whether a certain class or classes of offences have been committed, and then overshare the information collected by rental vehicles to avoid the liability that could follow from failing to fully meet whatever obligations are placed upon them.

Of course, outright nefarious actors can also take advantage of the digital connectivity built into contemporary vehicles.

Just as the trove of data can be helpful for solving crimes, it can also be used to commit them, Amico said. He pointed to a case in Australia, where a man stalked his ex-girlfriend using an app that connected to her high-tech Land Rover and sent him live information about her movements. The app also allowed him to remotely start and stop her vehicle and open and close the windows.

As in so many other areas, connectivity is being built into vehicles without real or sufficient assessment of how to secure the new technologies and deter harmful or undesirable secondary uses of data. Engineers rarely worry about these outcomes, corporate lawyers aren’t attentive to these classes of issues, and the security of contemporary vehicles is generally garbage. Combined, this means that government bodies are almost certainly going to expand the range of data they can access without first going through a public debate about the appropriateness of doing so, or the creation of specialized warrants that would limit data mining. Moreover, in countries with weak policing accountability structures, it will be impossible even to assess how regularly government officials obtain access to information from cars, how such data lets them overcome other issues they say they are encountering (e.g., encryption), or the utility of this data in investigating crimes and introducing it as evidence in court cases.


Links for December 7-11, 2020


  • Frustrating the state: Surveillance, public health, and the role of civil society || “…surveillance in times of crisis poses another threat. By granting states unfettered power through emergency orders, data collected through digital surveillance could be shared across agencies and used for purposes beyond the original intention of fighting COVID-19. In states where democratic backsliding has been underway, surveillance could be used to deter dissent and silence government critics. According to Verisk Maplecroft, a risk consultancy firm, Asia is now the highest risk region in both their “Right to Privacy” and “Freedom of Opinion and Expression” indices as “strongmen” in Asia capitalize on the pandemic.” // Surveillance is, almost by its nature, inequitable and the potential harms linked with pandemic surveillance are neither novel nor unforeseeable.
  • Rebecca Solnit: On not meeting nazis halfway || “… the truth is not some compromise halfway between the truth and the lie, the fact and the delusion, the scientists and the propagandists. And the ethical is not halfway between white supremacists and human rights activists, rapists and feminists, synagogue massacrists and Jews, xenophobes and immigrants, delusional transphobes and trans people. Who the hell wants unity with Nazis until and unless they stop being Nazis?”
  • Instagram’s latest middle finger || “…Instagram is now nearly completely unrecognizable from the app that I fell in love with. The feed of images is still key, but with posting now shoved into a corner, how long until that feed becomes a secondary part of the service?” // Cannot agree more.
  • The Epicenter // The NYT’s storytelling in this piece on the experiences of the Covid-19 outbreak in poorer areas of New York is simultaneously beautiful and heartbreaking.
  • Poor security at online proctoring company may have put student data at risk || “Kumar, CEO of Proctortrack’s parent company Verificient, says students have “valid concerns” and that he sympathizes with their discomfort. Proctoring software is “intrusive by nature” he says, but “if there’s no proctoring solution, institutions will have to totally change how they provide exams. Often you can’t do that given the time and limitations we have.”” // Justifying a gross product on the basis that, if you didn’t produce it, other organizations would have to behave more ethically is a very curious, and weird, way of defending your company’s existence.
  • China rethinking its role || “China’s use of war memory to shape its international position has been much less effective overseas than it has at home. However, the significance of its efforts is real, and may become more effective over time. China wants to create a global narrative around itself which shares a common understanding of the modern world – the idea that 1945 is the beginning of the current order – but places China at the heart of the creation and management of that order. The narrative had more power during an era when the US, anomalously, had a leader who cared little for the order shaped by America in Asia since 1945. Now that a president with a more long-range view of the role of the United States is about to take office, we may see something different again: two differing versions of what 1945 meant in Asia, as defined by Beijing and Washington – and the competition for moral standing that comes from the embrace of that legacy.” // This is a fascinating recounting of how China is re-interpreting activities undertaken by Nationalist forces during World War Two to justify its efforts to be more assertive in the international order today. Like so much in China, understanding how narratives are built, along with their domestic and foreign rationales and perceived utility, is critical to appreciating the country’s foreign policy ambitions, and those ambitions’ potentials and limitations.