IWD - What I would still say now

A few years after this interview, I still find myself coming back to the same questions: representation, leadership, defaults, and who gets to shape the systems we build.

A few years ago, Catarina Peyroteo Salteiro interviewed me for a Women in Tech series at Defined.ai.

At the time, I was leading product and design, had recently started taking on the engineering organization as well, and we were pushing into a new product line around data model evaluation and the data marketplace. It was March 8, 2022: before ChatGPT, before Stable Diffusion’s public release, and right around the moment DALL·E 2 was being introduced. The generative AI wave that would later reshape the data industry had not fully hit yet.

Watching the interview again now is a slightly strange experience. Some of the references belong very clearly to another moment in the industry. But I can already hear a few ideas I still care about now, even if I would phrase some of them differently today, and probably with a bit less patience for euphemism.

In the conversation, I spoke about a path that was never especially linear: from linguistics and academia into natural language processing, computer science, technology, product, and leadership. I spoke about representation in data, about bias, about the importance of role models, and about the fact that inclusion cannot be treated as the homework of the underrepresented.

That all still feels true.

What has changed is the scale of the systems we are building on top of, and with it, the weight of the choices underneath. Since then, I have spent more time in executive and advisory roles, helping shape products, teams, and organizations in environments where ambiguity is high, resources are finite, and the easy answer is often to accept the default. My more recent work, including at Mozilla.ai, has only made some of those earlier instincts sharper: questions of openness, control, dependency, and representation are not separate questions. More and more, they are the same problem viewed from different layers of the stack.

Back then, I was talking mostly about representativeness in data: whether people can recognize themselves in the systems we build, whether language, culture, humor, age, and different ways of being in the world are actually reflected in the product. Today, I find myself thinking just as much about representativeness in infrastructure, governance, and power: who gets to shape the systems, who gets locked into them, who gets left out of the defaults, and who still has room to choose.

The questions got bigger. The underlying concern did not.

I also smiled watching myself answer the question about the role of women in male-dominated industries by pushing back on the premise a bit. I still agree with that version of me. Inclusion was never going to be solved by asking those who are underrepresented to become infinitely resilient, polished, helpful, and inspiring while everyone else keeps the keys. That was nonsense then, and it is nonsense now. The same is true well beyond women in tech. It applies to any group expected to adapt quietly to systems, cultures, and defaults that were not designed with them in mind.

There is another moment in the interview where I react against the diminutive language often used around women in leadership, the whole irritating ecosystem of labels that somehow manage to patronize and celebrate at the same time. I still have very little patience for that. More broadly, I have little patience for any language that marks some leaders as normal and others as exceptions, novelties, or special cases. A CEO is a CEO. A leader is a leader. We do not need linguistic glitter every time someone from an underrepresented group enters the room.

If I were answering the same questions today, especially in the context of International Women’s Day, I would still absolutely speak about women in tech and leadership. But I would connect that conversation more explicitly to a broader one: what kinds of systems and organizations we still reward, what kinds of people we still make improbable at the top, and why diversity is not just a moral goal or a visual one, but a performance advantage. Teams, products, and institutions get better when they are forced to think beyond their own defaults.

And maybe that is what I would say to the Julie in this video: keep going, yes, but keep sharpening too. Stay curious. Stay difficult in the right ways. Do not let polish replace point of view. Do not confuse consensus with quality. And do not wait too long for permission from systems that were not designed with you in mind anyway.

I am sharing the video here not because I think old interviews are sacred artifacts. They are not. Most are just time capsules with questionable lighting and at least one answer you would now edit with a red pen. But this one reminded me that some questions stay worth asking, even as the stack changes underneath them.

If anything, they become more urgent.

We spend a lot of time asking whether technology is inclusive, fair, or responsible. Those are good questions. But they are incomplete. A harder one is this: who gets to define the defaults in the first place, and who is expected to adapt to them?

That question felt important to me then.

It feels even more important now.