Link

Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. Second, while a vanishingly small number of PhD holders obtain a tenure track role, a reasonable number may be going into the very digital-first companies that researchers need data from to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and how it relates to data types, availability, and the like. Transparency is a deeply contested concept and there are a lot of ways that the revelation of data creates a funhouse-mirror effect, insofar as what researchers ‘see’ can be badly distorted from what is actually happening.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, whether the government and its regulators have the staff time as well. Professors are doing a lot of things whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and related, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it’s never going to be their full-time job to do so. Professors are also going to be required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions…but they won’t happen fast enough, or in accessible places or in plain language, to generally affect policy debates.

So, what might need to be added to start fleshing out how universities are organised to make use of data released by companies and have policy impacts in research outputs?

First, universities in Canada would need to get truly serious about creating a ‘researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations which try and cover the waterfront of all digital issues. They could, instead, be more focused to reduce the number of staff or fellows that are needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly while using a peer-review-lite process to validate findings, but not see journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees but as specialists who will be of interest to government and corporations. This means that the university will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically, if universities are to be successful in holding big data companies to account they’ll need to incubate quasi-NGOs and let them loose under the university’s auspices. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policy making to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in context with Tworek’s argument: I think that assuming that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD, or (b) possess or aspire to possess a research-focused tenure track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to seriously get involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching, writing and publishing in inaccessible locations using baroque theoretical lenses, pursuing funding opportunities, undertaking large amounts of department service, and performing graduate student supervision are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎
Link

Digital Currency Standards Heat Up

There is an ongoing debate as to which central banks will launch digital currencies, by which date, and how currencies will be interoperable with one another. Simon Sharwood, writing for The Register, is reporting that China’s Digital Yuan is taking big steps toward answering many of those questions:

According to an account of the meeting in state-controlled media, Fan said standardization across payment systems will be needed to ensure the success of the Digital Yuan.

The kind of standardization he envisioned is interoperability between existing payment systems – whether they use QR codes, NFC or Bluetooth.

That’s an offer AliPay and WeChat Pay can’t refuse, unless they want Beijing to flex its regulatory muscles and compel them to do it.

With millions of payment terminals outside China already set up for AliPay and WeChat Pay, and the prospect of the Digital Yuan being accepted in the very same devices, Beijing has the beginnings of a global presence for its digital currency.

When I walk around my community I very regularly see options to use AliPay or WeChat Pay, and see many people using these options. The prospect that the Chinese government might be able to take advantage of existing payment structures to also use a government-associated digital fiat currency would be a remarkable manoeuvre that could theoretically occur quite quickly. I suspect that when/if some Western politicians catch wind of this they will respond quickly and bombastically.

Other governments’ central banks should, ideally, be well underway in developing the standards for their own digital fiat currencies. These standards should be put into practice in a meaningful way to assess their strengths and correct their deficiencies. Governments that are not well underway in launching such digital currencies are running the risk of seeing some of their population move away from domestically-controlled currencies, or basket currencies where the state determines what composes the basket, to currencies managed by foreign governments. This would represent a significant loss of policy capacity and, arguably, economic sovereignty for at least some states.

Why might some members of their population shift over to, say, the Digital Yuan? In the West this might occur when individuals are travelling abroad, where WeChat Pay and AliPay infrastructure is often more usable and more secure than credit card infrastructures. After using these for a while the same individuals may continue to use those payment methods for ease and low cost when they return home. In less developed parts of the world, where AliPay and WeChat Pay are already becoming dominant, it could occur as members of the population continue their shift to digital transactions and away from currencies controlled or influenced by their governments. The effect would be, potentially, to provide a level of influence to the Chinese government while exposing sensitive macro-economic consumer habits that could be helpful in developing Chinese economic, industrial, or foreign policy.

Western government responses might be to bar the use of the Digital Yuan in their countries but this could be challenging should it rely on common standards with AliPay and WeChat Pay. Could a ban surgically target the Digital Yuan or, instead, would it need to target all payment terminals using the same standard and, thus, catch AliPay and WeChat Pay as collateral damage? What if a broader set of states all adopt common standards, which happen to align with the Digital Yuan, and share infrastructure: just how many foreign and corporate currencies could be disabled without causing a major economic or diplomatic incident? To what extent would such a ban create a globally bifurcated (trifurcated? quadfurcated?) digital payment environment?

Though some governments might regard this kind of ‘burn them all’ approach as desirable there would be an underlying question of whether such an effect would be reasonable and proportionate. We don’t ban WeChat in the West, as an example, in part due to such an action being manifestly disproportionate to risks associated with the communications platform. It is hard to imagine how banning the Digital Yuan, along with WeChat Pay or AliPay or other currencies using the same standards, might not be similarly disproportionate where such a decision would detrimentally affect hundreds of thousands, or millions, of people and businesses that already use these payment systems or standards. It will be fascinating to see how Western central banks move forward to address the rise of digital fiat currencies and, also, how their efforts intersect with the demands and efforts of Western politicians that regularly advocate for anti-China policies and laws.

Aside

2022.9.18

A couple thoughts after shooting with the iPhone 14 Pro for a day, as an amateur photographer coming from an iPhone 12 Pro and who also uses a Ricoh GR and Fuji X100F.

  1. The 48 megapixel 24mm (equiv.) lens is nearly useless for street photography when capturing images at 48 megapixels. It takes 1+ seconds to capture an image at this resolution. That’s not great for trying to catch a subject or scene at just the right moment. (To me, this says very, very bad things about what Apple Silicon can actually do.) Set the captured resolution to 12 megapixels in ProRAW if you’re shooting fast-moving or fast-changing subjects/scenes.
  2. The 78mm (equiv.) telephoto is pretty great. It really opens a new way of seeing the world for me. I also think it’s great for starting street photographers who aren’t comfortable being as close as 28mm or 35mm might require.
  3. The new form factor means the MagSafe-compatible battery I use doesn’t fit. Which was a pretty big surprise and leads into item 4…
  4. Capturing 48 megapixel images, at full resolution, while using your phone in bright daylight (and thus raising the screen to full brightness), absolutely destroys battery life. Which means you’re likely to need a battery pack to charge your phone during extended photoshoots. Make sure you choose one that’s the right size!
  5. I like the ability to use the photographic styles. But it really sucks that you can’t see what effect they’d have on monochrome/black and white images. I shoot 95-99% in monochrome; this is likely less of an issue for other folks.
  6. The camera app desperately needs an update and reorganization. It is kludgy and a pain in the ass to use if you need to change settings quickly on the street. Do. Not. Like. It’s embarrassing Apple continues to ship such a poor application.

I haven’t taken the phone out to shoot extensively at night, though some staged shots at home at night showcase how much better night mode is compared to that in the iPhone 12 Pro.

Anyway, early thoughts. More complete ones will follow in the coming week or so.

Tech for Whom?

Charley Johnson has a good line of questions and critique for any organization or group which is promoting a ‘technology for good’ program. The crux is that any and all techno-utopian proposals suggest technology as the means of solving a problem as defined by the party making the proposal. Put another way, these kinds of solutions do not tend to solve real underlying problems but, instead, solve the ‘problems’ for which hucksters have built a pre-designed ‘solution’.

This line of analysis isn’t new, per se, and follows in a long line of equity, social justice, feminist, and critical theory writing. Still, Johnson does a good job in extracting key issues with techno-utopianism. Key is that these solutions tend to present a ‘tech for good’ mindset that:

… frames the problem in such a way that launders the interests, expertise, and beliefs of technologists…‘For good’ is problematic because it’s self-justifying. How can I question or critique the technology if it’s ‘for good’? But more importantly, nine times out of ten ‘for good’ leads to the definition of a problem that requires a technology solution.

One of the things that we are seeing more commonly is the use of data, in and of itself, as something that can be used for good: data for good initiatives are cast as being critical to solving climate change, making driving safer, or automating away the messier parts of our lives. Some of these arguments are almost certainly even right! However, the proposed solutions tend to rely on collecting, using, or disclosing data—derived from individuals’ and communities’ activities—without obtaining their informed, meaningful, and ongoing consent. ‘Data for good’ depends, first and often foremost, on removing the agency to say ‘yes’ or ‘no’ to a given ‘solution’.

In the Canadian context efforts to enable ‘good’ uses of data have emerged through successively introduced pieces of commercial privacy legislation. The legislation would permit the disclosure of de-identified personal information for “socially beneficial purposes.” Information could be disclosed to government, universities, public libraries, health care institutions, organizations mandated by the government to carry out a socially beneficial purpose, and other prescribed entities. Those organizations could use the data for a purpose related to health, the provision or improvement of public amenities or infrastructure, the protection of the environment or any other prescribed purpose.

Put slightly differently, whereas Johnson’s analysis is directed at a broad concept of ‘data for good’ in tandem with elucidating examples, the Canadian context threatens to see broad-based techno-utopian uses of data enabled at the legislative level. The legislation includes the ability to expand who can receive de-identified data and the range of socially beneficial uses, with new parties and uses being defined by regulation. While there are a number of problems with these kinds of approaches—which include the explicit removal of consent of individuals and communities to having their data used in ways they may actively disapprove of—at their core the problems are associated with power: the power of some actors to unilaterally make non-democratic decisions that will affect other persons or communities.

This capacity to invisibly express power over others is the crux of most utopian fantasies. In such fantasies, power relationships are resolved in the absence of making them explicit and, in the process, an imaginary is created wherein social ills are fixed as a result of power having been hidden away. Decision making in a utopia is smooth and efficient, and the power asymmetries which enable such situations are either hidden away or just not substantively discussed.

Johnson’s article concludes with a series of questions that act to re-surface issues of power by explicitly raising questions of agency and the origin and nature of the envisioned problem(s) and solution(s):

Does the tool increase the self-determination and agency of the poor?

Would the tool be tolerated if it was targeted at non-poor people?

What problem does the tool purport to solve and who defined that problem?

How does the way they frame the problem shape our understanding of it?

What might the one framing the problem gain from solving it?

We can look to these questions as, at their core, raising issues of power—who is involved in determining how agency is expressed, who has decision-making capabilities in defining problems and solutions—and, through them, issues of inclusion and equity. Implicit through his writing, at least to my eye, is that these decisions cannot be assigned to individuals alone but, rather, to individuals and their communities together.

One of the great challenges for modern democratic rule making is that we must transition from imagining political actors as rational, atomic, subjects to ones that are seen as embedded in their community. Individuals are formed by their communities, and vice versa, simultaneously. This means that we need to move away from traditional liberal or communitarian tropes to recognize the phenomenology of living in society, alone and together simultaneously, while also recognizing and valuing the tilting power and influence of ‘non-rational’ aspects of life that give life much of its meaning and substance. These elements of life are most commonly those demonized or denigrated by techno-utopians on the basis that technology is ‘rational’ and is juxtaposed against the ‘irrationality’ of how humans actually live and operate in the world.

Broadly and in conclusion, then, techno-utopianism is functionally an issue of power and domination. We see ‘tech bros’ and traditional power brokers alike advancing solutions to their perceived problems, and this approach may be further reified should legislation be passed to embed this conceptual framework more deeply into democratic nation-states. What is under-appreciated is that while such legislative efforts may make certain techno-utopian activities lawful the subsequent actions will not, as a result, necessarily be regarded as legitimate by those affected by the lawful ‘socially beneficial’ uses of de-identified personal data.

The result? At best, ambivalence that reflects the population’s existing alienation from democratic structures of government. More likely, however, is that lawful but illegitimate expressions of ‘socially beneficial’ uses of data will further delegitimize the actions and capabilities of the states, with the effect of further weakening the perceived inclusivity of our democratic traditions.

Aside

2022.7.31

Fungi by Christopher Parsons

After spending far too much time agonizing over what to get printed I finally put in an order for 21 prints. Most are black and white from the past 2-3 years, and will be used to create 1-2 gallery walls and refresh another wall.

Untitled by Christopher Parsons

I’m looking forward to getting them, though I’m working with a new printer so have some minor degree of anxiety over what I’ll end up with. I’ve generally had good luck with local printers but the past few personal photo books I’ve had printed (albeit from international companies) have been disappointing when I’ve gotten them in my hands.

The next step will be to purchase a raft of frames for all the prints. And then, finally, actually add them all to my walls!

Adding Context to Facebook’s CSAM Reporting

In early 2021, John Buckley, Malia Andrus, and Chris Williams published an article entitled, “Understanding the intentions of Child Sexual Abuse Material (CSAM) sharers” on Meta’s research website. They relied on information that Facebook/Meta had submitted to NCMEC to better understand why individuals they reported had likely shared illegal content.

The issue of CSAM on Facebook’s networks has risen in prominence following a 2019 report in the New York Times. That piece indicated that Facebook was responsible for reporting the vast majority of the 45 million online photos and videos of children being sexually abused. Ever since, Facebook has sought to contextualize the information it discloses to NCMEC and explain the efforts it has put in place to prevent CSAM from appearing on its services.

So what was the key finding from the research?

We evaluated 150 accounts that we reported to NCMEC for uploading CSAM in July and August of 2020 and January 2021, and we estimate that more than 75% of these did not exhibit malicious intent (i.e. did not intend to harm a child), but appeared to share for other reasons, such as outrage or poor humor. While this study represents our best understanding, these findings should not be considered a precise measure of the child safety ecosystem.

This finding is significant, as it suggests that the vast majority of the content reported by Facebook—while illegal!—is not deliberately being shared for malicious purposes. Even if we assume that the number sampled should be adjusted—perhaps only 50% of individuals were malicious—we are still left with a significant finding.

There are, of course, limitations to the research. First, it excludes all end-to-end encrypted messages. So there is some volume of content that cannot be detected using these methods. Second, it remains unclear how scientifically robust it was to choose the selected 150 accounts for analysis. Third, and related, there is a subsequent question of whether the selected accounts are necessarily representative of the broader pool of accounts that are associated with distributing CSAM.
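
On the sampling point, it is worth keeping in mind how much statistical wiggle room a sample of 150 accounts leaves. As a back-of-the-envelope illustration (my arithmetic, not a figure the researchers published), a normal-approximation 95% confidence interval around the 75% estimate looks like this:

```python
import math

# Back-of-the-envelope only: a normal-approximation 95% confidence
# interval around Meta's reported estimate. The inputs come from the
# post (150 accounts, ~75% judged non-malicious); the interval is my
# arithmetic, not something the researchers published.
n = 150        # accounts sampled
p_hat = 0.75   # estimated share lacking malicious intent

standard_error = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * standard_error  # z-value for a 95% interval

print(f"95% CI: {p_hat - margin:.1%} to {p_hat + margin:.1%}")
# -> roughly 68.1% to 81.9%
```

Even at the bottom of that interval, roughly two-thirds of the sampled accounts would lack malicious intent, though, as noted, how the 150 accounts were selected matters more than sampling error alone.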

Nevertheless, this seeming sleeper research hit has significant implications insofar as it would shrink the number of genuinely problematic accounts/individuals disclosing CSAM to other parties. Clearly more work along this line is required, ideally across Internet platforms, in order to add further context and details to the extent of the CSAM problem and subsequently define what policy solutions are necessary and proportionate.

Solved: Changed Name Server and Apple Custom Email Domain Stopped Working

Photo by Miguel Á. Padriñán on Pexels.com

I recently moved a self-hosted WordPress website from a shared hosting environment to WordPress.com. The migration was smooth: I had to export the XML for the self-hosted WordPress installation and import it to the WordPress.com CMS, and then fix a few images. The website is functioning well and the transition was smooth.

However, shortly after doing so I started having issues with receiving emails at my custom email which was set up with Apple’s iCloud Custom Email Domain. Not good!

The Problem

I changed the name servers with the domain registrar (e.g., Bluehost or Dreamhost) so that my custom domain (e.g., example.com) would point to the WordPress.com infrastructure. However, in doing so my custom email (user@example.com) that was using Apple’s iCloud Custom Email Domain stopped sending or receiving email.

This problem surfaced because email could not be sent/received and, also, I could not verify the domain in Apple’s “Custom Email Domain” settings. Specifically, iCloud presented the dialogue message “Verifying your domain. This usually takes a few minutes but could take up to 24 hours. You’ll be able to continue when verification is complete.” The “Reverify” button, below the dialogue, was greyed out.

Background

When you have registered the domain with a registrar other than WordPress (e.g., Bluehost, Dreamhost, etc.) and then host a website with WordPress.com you will have to update the name servers the domain uses. So, you will need to log into your registrar and point the name servers at the registrar to NS1.Wordpress.com, NS2.Wordpress.com, and NS3.Wordpress.com. In doing so, all the custom DNS information you have provided to your registrar, and which has been used to direct email to a third-party email provider such as Apple and iCloud, will cease to work.

The Solution

When transitioning to using WordPress’ nameservers you will need to re-enter custom domain information in WordPress’ domain management tabs. Specifically, you will need to add the relevant CNAME, TXT, and MX records.1 This will entail the following:

  1. Log into your WordPress.com website, and navigate to: Upgrades >> Domains
  2. Select the domain for which you want to modify the DNS information
  3. Select “DNS Records” >> Manage
  4. Select “Add Record” (Upper right hand corner)
  5. Enter the information which is provided to you by your email provider

Apple iCloud Custom Domain and WordPress.com

When setting up your custom domain with Apple you will be provided with a set of TXT, MX, and CNAME records to add. Apple also provides the requisite field information in a help document.

While most of these records are self-evident, when adding the DKIM record (CNAME record-type) in WordPress.com, the Host listed on Apple’s website is entered in the “Name” field on WordPress’ “Add a Record” page. The “Value” of the DKIM record on Apple’s website is entered as the “Value” on WordPress’ site.

| Type | Name | Value |
|-------|-----------------|----------------------------------------------|
| CNAME | sig1._domainkey | sig1.dkim.example.com.at.icloudmailadmin.com |

Visualization of Adding iCloud CNAME Record for WordPress.com


Note: Apple will generate a new TXT record to verify you control the domain after pointing the name servers to WordPress.com. This record will look something like “apple-domain=[random set of upper/lower case letters and numbers]”. You cannot use the “apple-domain=” value that was used in setting up your custom email information with your original registrar’s DNS records. You must use the new “apple-domain=” value when updating your WordPress.com DNS records.

Once you’ve made the needed changes with WordPress.com, and re-verified your domain with Apple’s iCloud Custom Domains, your email should continue working.
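
If you want to confirm the records have actually propagated before leaning on Apple’s “Reverify” button, a quick DNS lookup can help. Below is a minimal sketch using the dnspython library; it is not an Apple or WordPress tool, and example.com is a placeholder for your own domain. Compare the output against the exact values Apple shows in your iCloud custom email domain settings.

```python
# A small verification sketch, not an official Apple or WordPress tool.
# It uses the dnspython library (pip install dnspython), and
# "example.com" is a placeholder for your own domain.
import dns.resolver

DOMAIN = "example.com"

def lookup(name: str, record_type: str) -> list[str]:
    """Return the records for a name, or an empty list if none exist."""
    try:
        return [r.to_text() for r in dns.resolver.resolve(name, record_type)]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

print("MX:  ", lookup(DOMAIN, "MX"))    # should list Apple's mail servers
print("TXT: ", lookup(DOMAIN, "TXT"))   # should include the new apple-domain= value
print("DKIM:", lookup(f"sig1._domainkey.{DOMAIN}", "CNAME"))  # Apple's DKIM host
```

If any of these come back empty or stale, the records likely haven’t been added correctly at WordPress.com, or propagation simply hasn’t finished yet.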

In the Future

It would be great if WordPress actively and clearly communicated to users who are pointing their name servers to WordPress.com that there is a need to immediately also update and add email-related DNS records. I appreciate that not all customers may require this information, but proactively and forcefully sharing this information would ensure that their customers are not trying to fix broken email while simultaneously struggling to identify what problem actually needs to be resolved.


  1. WordPress does have a support page to help users solve this. ↩︎

So You Can’t Verify Your Apple iCloud Custom Domain

Photo by Tim Gouw on Pexels.com

When you set up a custom iCloud email domain you have to modify the DNS records held by your domain’s registrar. On the whole, the information provided by Apple is simple and makes it easy to set up the custom domain.

However, if you change where your domain’s name servers point, such as when you modify the hosting for a website associated with the domain, you must update the DNS records with whomever you are pointing the name servers to. Put differently: if you have configured your Apple iCloud custom email by modifying the DNS information at host X, as soon as you shift to host Y by pointing your name servers at them you will also have to update DNS records with host Y.

Now, what if you don’t do this? Eventually as DNS information propagates over the subsequent 6-72 hours you’ll be in a situation where your custom iCloud domain email address will stop sending or receiving information because the routing information is no longer valid. This will cause Apple’s iCloud custom domain system to try and re-verify the domain; it will do this because the DNS information you initially supplied is no longer valid.
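
One quick way to see where a domain currently points is to query its NS records. The sketch below uses the dnspython library rather than anything Apple provides, and example.com stands in for your own domain:

```python
# A quick check (a sketch, not an Apple-provided tool) of which name
# servers a domain currently points at, using dnspython.
import dns.resolver

for record in dns.resolver.resolve("example.com", "NS"):
    print(record.to_text())
```

If the name servers shown belong to your new host but mail is failing, the email-related records almost certainly still need to be re-created at that new host.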

Should you run into this issue you might, naturally, first reach out to Apple support. You are, after all, running your email through their servers.

Positively: you will very quickly get a real-live human on the phone to help you. That’s great! Unfortunately, however, there is very little that Apple’s support staff can do to help you. There are very, very few internal help documents pertaining to custom domains. As was explained to me, the sensitivity and complexity of DNS (and the fact that information is non-standardized across registrars) means that the support staff really can’t help much: you’re mostly on your own. This is not communicated when setting up Apple custom email domains.

In a truly worst case scenario you might get a well-meaning but ignorant support member who leads you deeply astray in attempting to help troubleshoot and fix the problem. This, unfortunately, was my experience: no matter what is suggested, this problem is not solved by deleting your custom email accounts hosted by Apple on iCloud. Don’t be convinced this is ever a solution.

Worse, after deleting the email accounts associated with your custom iCloud domain email you can get into a situation where you cannot click the re-verify button on the front end of iCloud’s custom email domain interface. The result is that while you see one thing on the graphical interface—a greyed out option to ‘re-verify’—folks at Apple/server-side do not see the same status. Level 1 and 2 support staff cannot help you at this stage.

As a result, you can (at this point) be in limbo insofar as email cannot be sent or received from your custom domain. Individuals who send you messages will get errors indicating that the email identity no longer exists. The only group at Apple who can help you, in this situation, is Apple’s engineering team.

That team apparently does not work weekends.

What does this mean for using custom email domains for iCloud? For many people not a lot: they aren’t moving their hosting around and so it’s very much a ‘set and forget’ situation. However, for anyone who does have an issue the Apple support staff lacks good documentation to determine where the problem lies and, as a result, can (frankly) waste an inordinate amount of time in trying to figure out what is wrong. I would hasten to note that the final Apple support member I worked with, Derek, was amazing in identifying what the issue was, communicating the challenges facing Apple internally, and taking ownership of the problem: Derek rocks. Apple support needs more people like him.

But, in the absence of being able to hire more Dereks, Apple needs better scripts to help their support staff assist users. And, moreover, the fact that Apple lacks a large enough engineering team to also have some people working weekends to solve issues is stunning: yes, hiring is challenging and expensive, but Apple is one of the most profitable companies in the world. Their lack of a true 24/7 support staff is absurd.

What’s the solution if you ever find yourself in this situation, then? Make sure that you’ve done what you can with your new domain settings and, then, just sit back and wait while Apple tries to figure stuff out. I don’t know how, exactly, Apple fixed this problem on their end, though when it is fixed you’ll get an immediate prompt on your iOS devices that you need to update your custom domain information. It’s quick to take the information provided (which will include a new DKIM record that is unique to your new domain) and then get Apple custom iCloud email working with whomever is managing your DNS records.

Ultimately, I’m glad this was fixed for me but, simultaneously, the ability of most of Apple’s support team to provide assistance was minimal. And it meant that for 3-4 days I was entirely without my primary email address, during a busy work period. I’m very, very disappointed in how this was handled irrespective of things ultimately working once again. At a minimum, Apple needs to update its internal scripts so that their frontline staff know the right questions to ask (e.g., did you change information about your website’s DNS information?) to get stuff moving in the right direction.

Link

Vulnerability Exploitability eXchange (VEX)

CISA has a neat bit of work they recently published, entitled “Vulnerability Exploitability eXchange (VEX) – Status Justifications” (warning: opens as a .pdf).1 Product security teams that adopt VEX could assert the status of specific vulnerabilities in their products. As a result, clients’ security staff could allocate time to remediate actionable vulnerabilities instead of burning time on potential vulnerabilities that product security teams have already closed off or mitigated.

There are a number of different machine-readable status types that are envisioned, including:

  • Component_not_present
  • Vulnerable_code_not_present
  • Vulnerable_code_cannot_be_controlled_by_adversary
  • Vulnerable_code_not_in_execute_path
  • Inline_mitigations_already_exist
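
To make the idea concrete, here is a minimal, hypothetical VEX-style statement rendered from Python. It is loosely modeled on the OpenVEX format rather than drawn from CISA’s document; the vendor name, product identifier, and chosen vulnerability are invented for illustration, while the status and justification values mirror the list above.

```python
# A minimal, hypothetical VEX-style statement, loosely modeled on the
# OpenVEX format (not taken from CISA's document). The issuer and
# product identifier below are invented for illustration; the status
# and justification values mirror CISA's list above.
import json

vex_document = {
    "author": "Example Vendor PSIRT",          # hypothetical issuer
    "timestamp": "2022-07-01T00:00:00Z",
    "statements": [
        {
            "vulnerability": {"name": "CVE-2021-44228"},      # Log4Shell
            "products": ["pkg:example/widget-server@4.2.0"],  # invented identifier
            "status": "not_affected",
            "justification": "vulnerable_code_not_present",
        }
    ],
}

print(json.dumps(vex_document, indent=2))
```

A client’s security team ingesting documents like this could automatically drop “not_affected” entries from its remediation queue and spend its time on the vulnerabilities that remain actionable.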

CISA’s publication spells out what each status entails in more depth and includes diagrams to help readers understand what is envisioned. However, those same readers need to pay attention to a key caveat, namely, “[t]his document will not address chained attacks involving future or unknown risks as it will be considered out of scope.” Put another way, VEX is used to assess known vulnerabilities and attacks. It should not be relied upon to predict potential threats based on not-yet-public attacks nor new ways of chaining known vulnerabilities. Thus, while it would be useful to ascertain if a product is vulnerable to EternalBlue, today, it would not be useful to predict or assess the exploited vulnerabilities prior to EternalBlue having been made public nor new or novel ways of exploiting the vulnerabilities underlying EternalBlue. In effect, then, VEX is meant to address the known risks associated with N-Days as opposed to risks linked with 0-Days or novel ways of exploiting N-Days.2

For VEX to best work there should be some kind of surrounding policy requirements, such as when/if a supplier falsely (as opposed to incorrectly) asserts the security properties of its product there should be some disciplinary response. This can take many forms and perhaps the easiest relies on economics and not criminal sanction: federal governments or major companies will decline to do business with a vendor found to have issued a deceptive VEX, and may have financial recourse based on contractual terms with the product’s vendor. When or if this economic solution fails then it might be time to turn to legal venues and, if existing approaches prove insufficient, potentially even introduce new legislation designed to further discipline bad actors. However, as should be apparent, there isn’t a demonstrable requirement to introduce legislation to make VEX actionable.

I think that VEX continues the current American administration’s work to advance a number of good policies that are meant to better secure products and systems. VEX works hand-in-hand with SBOMs and, also, may be supported by US Executive Orders around cybersecurity.

While Canada may be ‘behind’ the United States we can see that things are potentially shifting. There is currently a consultation underway to regenerate Canada’s cybersecurity strategy and infrastructure security legislation was introduced just prior to Parliament rising for its summer break. Perhaps, in a year’s time, we’ll see stronger and bolder efforts by the Canadian government to enhance infrastructure security with some small element of that recommending the adoption of VEXes. At the very least the government won’t be able to say they lack the legislative tools or strategic direction to do so.


  1. You can access a locally hosted version if the CISA link fails. ↩︎
  2. For a nice discussion of why N-days are regularly more dangerous than 0-Days, see: “N-Days: The Overlooked Cyber Threat for Utilities.” ↩︎