Summit: 10 Key Takeaways
1. Canada’s Edge Lies in Its Places: To attract talent, spark innovation, and tackle big challenges, Canada must level up the quality of its spaces.
2. Fight Polarization Locally: The erosion of trust in institutions starts and ends in our communities—local action can heal the divides.
3. Build for Beauty and Impact: Infrastructure isn’t just functional—it’s equity, climate resilience, culture, and meaning, all rolled into one. And it’s not inflationary.
4. Act Now by Starting Somewhere: Canada’s housing and mental health crises are everywhere, but proven solutions exist. We need to scale what works—urgently—by learning from the best.
5. Think Local, Act Local: Big changes start small. Empower communities with tools and resources to adapt and scale their solutions.
6. Diversify How We Invest: Canada needs flexible investment tools for every scale and every investor—public, private, and institutional.
7. Data Over Divisions: Drop the politics and act on the facts. Good data drives real change.
8. Digitize for Civic Power: Prioritize digital tools, AI, and accessible data to supercharge decision-making and civic innovation.
9. Own the Public Realm: Progress rests on leveraging the three P’s: procurement, public land, and the public realm.
10. Take Accountability: Canada’s future hinges on a resolution of longstanding jurisdictional problems. Devolve power and resources to communities to realize their full potential.
Full Panel Transcript
Note to readers: This video session was transcribed using auto-transcribing software. Questions or concerns with the transcription can be directed to citytalk@canurb.org with “transcription” in the subject line.
Benjamin de la Peña My name is Benjamin de la Pena. I’m with the Shared Use Mobility Center. Let me bring up my panel. I’m a Gen Xer pretending to be a millennial. But don’t worry, I won’t use any Gen Z terms on you. So we will make sure this is “high res” and we won’t be in “skibidi toilet”. For those of you who do not have children, you’ll have to get that translated. Is that it? The four of us. Okay. All right. I’m going to do what Mary did and I’m going to pull out my phone because I need my notes. All right. This is Harnessing AI and Data for New Technologies for Canada’s Future. So the morning was talking about problem structures and all of that. We’re going to take you a bit to the future since, of course, AI is all over, and is all the hype. And then we’re going to go back and say, “okay, what does that mean for now?” So I’ve got Myrna Bittner, the CEO and founder of RUNWITHIT Synthetics. Rupen Seoni is the Chief Client Officer of Environics Analytics. Did I say that correctly? [Yes, you did]. Yes. Carol McClellan is the Canada Discipline Lead, Digital and Geospatial, at Arup. And Ariana Seferiades, did I say that correctly? is the Responsible AI … I like that term, Responsible AI, which means all other AI is irresponsible … Manager at Mila, Québec AI Institute. All right, let’s jump into it. One for each of you. Right. Paint your most optimistic picture: ten years from now, we get AI and data infrastructure right, what does it look like and what does it do for people? I’ll start with Ariana over there.
Ariana Seferiades All right. So I like how you introduced me because, yes, AI is not responsible. And so that’s why it’s important for me, imagining this optimistic future, to see Canadian cities leading in responsible innovation. So having a clear vision and a strategy behind the use of tech and data. This means building technology to actually benefit citizens and putting their needs first, equity and sustainability. So I have two things in mind when imagining this optimistic future. The first thing is that we will have better and more efficient infrastructure and services. And in that regard, Mila and UN-Habitat actually published, two years ago, a report called AI and Cities, and it includes a lot of use cases in different domains and key areas in cities like energy, transportation, waste management, health care and much more. So I really encourage you to go and retrieve the report because it includes actionable recommendations on implementing AI in cities responsibly. And the second thing is that we have more empowered cities and leaders. So I come from an education background and I work in education at Mila right now. And I see how important it is to have citizens that understand when they are interacting with AI, what AI is, and what AI can do and cannot do. So understand the limitations of AI, and if something goes wrong when they are interacting with an AI system, they actually know where to go.
Benjamin de la Peña And I can yell at the AI.
Ariana Seferiades Exactly. And who’s responsible, right, behind that obscure black box of AI. So all in all, I imagine more empowered cities and more empowered citizens and leaders that can actually drive this innovation responsibly.
Benjamin de la Peña Fantastic … Myrna …
Myrna Bittner Oh no. So I have been in AI since the 1990s and in deep neural nets and language models.
Benjamin de la Peña That’s when AI was dial up!
Myrna Bittner This was when AI was dial-up. This was when doing absolutely as much as humanly possible, and then some, with as little hardware and compute as was available was key to success, and it was quite successful. So it set the foundation for what we’re seeing today. But when I look forward, if this is what this question is about, you know, I’ve been through a couple of winters, you know, in AI, and it’s looming. The signals are there. So the stock price drops at Nvidia, the understanding now, as communities are beginning to just slightly become aware of how hungry data centres are for power, of how much they’re going to consume in terms of water and how they’re actually going to compete for that infrastructure with communities themselves …
Benjamin de la Peña I’m going to pursue that later.
Myrna Bittner And so I think the positive part of winter is that things go back to: AI shall be a tool, it should be a tool that we use respectfully, carefully, transparently. We should be constantly advancing, you know, as you mentioned, the science and the understanding behind removing bias, behind what training on historical data means. And I’m really looking forward to that return, that elimination of the hype, which is just a rush into, you know, some really concerning territory in terms of its uses and the amalgamation of that in some very powerful companies. But an actual step back into how do we use it to augment the really important work we need to get done today? The really important work in infrastructure and social infrastructure and sustainability. And I think that that is there because, you know, AI will persist. The companies that are working on it, like my own, are succeeding, and we’re just waiting for that kind of elimination of the hype so that people can actually see it as a tool and don’t lose trust in AI in general because of AI that’s been done poorly.
Benjamin de la Peña That’s an interesting note on losing trust, right? Because it’s not just AI, we’re losing trust in institutions generally in this age. I’ll revisit that in a while, but I’ll go to Rupen and say, “what’s the optimistic picture?” And, you know, feel free to dream because we’ll tear it down later.
Rupen Seoni Okay. Okay. Well, Myrna, I feel like maybe we’ve lived parallel lives or something, because I remember back in the late 90s … So just a little context. What we do at Environics Analytics, and at a company I was at 30 years ago, is to create very, very localized estimates of thousands of data points that come from data that are at a higher kind of level of availability, which allows us to collate data and understand populations at a very granular level. So you can imagine it’s a lot of data to process. And I remember back in 1999, I think it might have been, when we used neural nets for the first time to help us create solutions on a, you know, typology of Canadians in their neighborhoods. And it really helped us move that along faster. And then in the 21 years of existence that we’ve had at Environics Analytics, we’ve progressively used machine learning techniques to be able to create the models that drive all of this data down to these low-level estimates that allow us to make better decisions in all kinds of ways, and our clients to do that, all through that. So we’ve kind of seen the power of it firsthand for a long time. But I think, you know, where we’re going is really … My colleague, Nader, who’s sitting over here, was saying that, you know, our clients really just want a crystal ball. And I think AI is really the closest thing that we’re going to have to that crystal ball. Rather than being descriptive as we’ve largely been in the past, we’ll become much more predictive. And in a municipal context or a city context, preventative, in identifying issues before they emerge so that we can address them. And I think multifaceted would be the other piece, because, you know, we’re able now to … and we will be able to in a much better way, get at that intersectionality between economic impacts, traffic impacts, sociodemographic impacts and look at that all together before we do something. So I think that’s really what we’re going to be able to do, which will help us make better decisions.
Benjamin de la Peña Wonderful future. Carol, and then after you tell us your future, I want you to take us to – what are you seeing now? Which some of you mentioned; Myrna, I’ll jump to you after. What are we seeing now that are indicators that we’re going to get there, the good signs, and then what are the signs that are worrying you? So tell us about your future – that’s optimistic. What’s happening now that makes you think we will make it there in ten years? And what’s happening now that kind of worries you?
Carol McClellan Okay. Sure. So first off, thanks, Rupen, for bringing up intersectionality, because I think today we’re hearing a lot about, you know, the basic problems that we’re continuing to struggle with in cities, right? So we’re struggling with housing, we’re struggling with mental health, we’re struggling with, you know, access to food and things like this. People are struggling to get their basic needs met. And we’re hearing also about silos and breaking down silos and taking more of a systems-based approach. These are all areas that AI … and data and good data management … can help to support: making sure that we’re collaborating, that we’re enabling those engagements and getting all the perspectives around the table, and making sure that we’re representing our communities and our citizens correctly when we’re taking that and trying to apply it in a design environment, and making sure that we’re testing our designs and that we’re more likely to get the desired outcomes when we are actually, you know, implementing infrastructure projects and that sort of thing. AI has the opportunity there. You know, as Myrna alluded to, back in the 1990s, early 2000s even, before we had cloud computing available, we had to be very, very selective in how we modeled things. We had to, you know, we had to do it at a smaller scale. We had to pick certain scenarios. Now we have the opportunity to throw it all into the cloud and have it crunch and iterate on those things and, you know, tease through it. I think the human spirit will always be part of AI. And the acknowledgment of the demographic in the room, I think, is very important, because I think, you know, as we’re getting to that older part of our careers, I think we have a really …
Benjamin de la Peña I like how you looked at that – older part of our careers …
Carol McClellan No, it’s me too. I think we’re all feeling that, you know, getting …
Benjamin de la Peña So is that the thing that’s worrying you?
Carol McClellan I think, well, I think … this is our responsibility: teaching our next generation who are coming up, you know. There’s a lot of people in our generation that are a little intimidated by technology, right? It’s hard to have good conversations with people that aren’t necessarily working deep in this space, where they’re not glazing over and getting intimidated and not necessarily picking up on everything. The newer generations are growing up with this stuff. Like the people on my team that are coming out of university are doing stuff that I didn’t even think of doing, you know, coming out of university. And I think it’s critical for us to take that responsibility to teach our next generation: what are the things that we need them to help us break down, and use technology to break down?
Benjamin de la Peña I’m going to jump to Myrna now because she started bringing this up … I’ll also add a little anecdote … After the Moonshot, the US government thought that we could solve all our problems through moonshots. And this was in the 1960s, and computers would help us. In fact, HUD, the Housing and Urban Development Department in the US, was built that way because it was supposed to have a central computer that would solve housing, and we know what happened with that. So anyway, Myrna, take us to what are the warning signs, what are the good signs. You started talking about it.
Myrna Bittner I think some of the good signs are that there is just extraordinarily good work being done. So, you know, even in hearing the need, you know, in the persistent kind of conversation this morning about we need to see place, we need to see data, we need to see interplay, we need to see how things interconnect … that’s the call of the future seekers who are setting out to say, you know, we actually have to define what a goal state is for sustainability, for planet and people, in our communities. And we need tools that help us see that, that help us measure it, quantify it and act on it. And I think that is the exciting part about, you know, the future, about where we are today. My company actually creates real, live, living models of regions and cities and their populations in order to do that work. So we generate the data that’s missing. We augment what people can’t see. We visualize to make it more accessible, you know, and we meet the most fascinating leaders from around the country, all different levels of government, most profoundly and interestingly, driven by the industry that lives in your cities. So whether or not you know it, they’re synthesizing you, and they’re looking at what their opportunities are given the landscape and given what they can see projected on it. So I think that to me is a really exciting part about, you know, being where we are today, as people are recognizing that. Some of the concerning parts, again, are that in this rush to say quantum, we want to see every scenario. We don’t actually need to see every scenario. Every scenario takes just a massive amount of compute. I think they were saying every time you ask ChatGPT a question, it’s like three bottles of water you are consuming for cooling. And that question could be, you know, what color socks should I wear today? It’s irrelevant. And yet those things are consuming our resources. They’re consuming the infrastructure that we are building. They’re outstripping, you know, our water conservation efforts, our energy efficiency efforts. And so we have to begin to value something different. And valuing something different can mean computing less. That should be the race for AI: how do we absolutely minimize the energy and cooling requirements, and the use cases that we put it to, because it is voracious.
Benjamin de la Peña I really like that point. It also has kind of a double impact, because your example was benign. What color socks do I wear? I just go with black because that’s pretty easy. But it’s also used for disinformation, right? So you have both a planetary and a social impact. Rupen, tell us, you know, the positive side of what you think is happening that’s good now, right, on your radar. And then what are the worrying things? The same question, right. And Ariana, I’ll turn to you with that too, so what do you think?
Rupen Seoni I’m actually going to start with the worrying thing … It picks up on what you just said, Myrna. You know, the worrying thing, I think, is the consumption of energy. Just look at the model that the companies putting out these generative AI models are using … It’s a consumer model. Put it in everybody’s hands. So we’re using it all the time for the most inane things. Like I met a friend of mine the other day for lunch and he used ChatGPT to suggest a place because he’s new to Ottawa. And of course we got there and the place was closed. It wasn’t available for lunch. And the question was “Where should we go for lunch today?” So, I mean, just as an example. A simple Google search would have revealed, at much, much lower energy consumption, the same … the correct answer. But the point is that money always wins in these things. And that’s the model that the AI companies are using to get a whole bunch of people hooked so then they can start charging. So that’s one concern. But I think the other concern obviously is around errors. And we haven’t wrapped our heads around where there are mistakes. And that speaks to the misinformation, either mistakes like I experienced or, you know, the deliberate ones. So how do we get our heads around that? Because the scale is just so large and the impacts, the potential for impact, is huge. So that’s obviously one of the big downsides. The positives, though, you know, and I’ll take a very kind of specific sort of area that we always experience when we work with clients that are trying to do modeling. There’s always that 80/20 rule, like they spend 80% of the time just trying to get the damn data together and get it organized so you can actually do the models. So I think what AI is providing is the ability to do that way more efficiently, so that 80% of the effort is actually spent on interpreting and understanding what we could do. So that … it’s a huge efficiency gain in the context of modeling scenarios.
Benjamin de la Peña Can we also train the AI to look at the data and say what’s missing, rather than saying “we conclude this”, right? Because the worry is it’s a black box, right? And another anecdote … coming through the airport last night, I got called for extra screening and it was okay, that happens. And I made a side joke about, “yeah, pick the brown person”, and the guy at Customs, or Border Patrol, said, “well, it’s the machine that picks it.” Which makes it sound like, “oh well, it’s completely objective because the machine picked it.” And so, you know, there’s this worry of abstracting that so much, like your friend looking for a place to eat, like the AI knows what eating is and knows what flavor is, right? It doesn’t even have taste buds, right? So it’s really, really interesting. Ariana, tell us your thoughts …
Ariana Seferiades Just picking up on that and what everyone said, one of the things that concerns me the most is this mindset that we call techno-solutionism. So this idea that basically AI is going to solve all our complex issues and it’s a quick fix and it’s easy to implement and it’s objective, and that’s not true. And actually it’s a worrying trend because it leads to poor implementations and it doesn’t actually address the root causes of issues, right? Like we saw … like last week, I was watching the news. You know, you always see headlines on AI, and you see AI used to actually decrease homelessness. And you start asking this question like, “okay, how is it being used?” I mean, it could be used to anticipate people that are about to lose their houses, which is great. But that doesn’t mean that it is actually going to solve the problem of homelessness. So just keep that in mind. And in terms of more positive trends, I would say that I’m seeing increasing attention paid to capacity building and AI education among leaders, public servants, policymakers, decision makers. Just last week I was here in Ottawa with my whole team and we delivered the AI Policy Compass. It is a learning program for policy professionals that we have at Mila with the Public Policy Forum, here in Ottawa and in Quebec. And we even made them do a coding exercise to actually assign parking spaces in the city with computer vision. And we had such amazing conversations. They shared the challenges that they’re facing with AI in their cities and municipalities, and that was encouraging. And the second thing I would say is that cities are actually emerging as key actors right now. So in the absence of major regulation, or in an ecosystem of fragmented governance, we see cities experimenting with AI and shaping AI governance. We have the case, for example, of Barcelona and New York, which were actually pioneering, already in 202, AI strategies that include public registries of algorithms and legislation on the use of data and on open data. So that’s really encouraging.
Benjamin de la Peña Fantastic. So the next part is going to be a long compound question. But I’m going to telegraph my closing round ahead: we’re a roomful of, you know, computer scientists, as you can see, totally sarcastic about that. I am a transportation nerd. So my last question, which I’m asking you to prep for, but we’ll get to that, is what’s your three-word bit of advice to the city leaders as they’re trying to grapple with AI and data infrastructure? Right. What kind of investments? But here’s the long compound question, and I will preface it with two readings. And I want you to kind of react to those. And then I’ll frame the question for you. The first is Shannon Vallor. She’s a philosopher at the University of Edinburgh. Right. In Edinburgh. And she said, “you may have heard the idea that AI is a stochastic parrot, that it basically just repeats things we say.” And she says, “Not really. AI is a mirror, and it’s a funhouse mirror. It takes all of the things we’ve written and reflects them back at you, and you think that this is actually how you look”, right? And then there’s another one, which, Rupen, you mentioned this, right, it’s like Shane MacDonald at Parker Digital Public IO was saying, “at the technical level, the single most unifying characteristic of models that we call AI is they add huge amounts of computation and analysis to whatever problem or tasks they’re applied to, whether it’s – let’s solve housing, or where do I take lunch?” Right. And the thing is, very few companies or cities actually have that infrastructure. So as Rupen says, it’s follow the money. They’re not really developing intelligence as much as they’re developing a service that they want you to subscribe to. Now, those are both stingers and prompts. That doesn’t mean that’s how I see these things. But in that context, and I am a transportation nerd, so we talk infrastructure all the time, and we were talking about highways in another conversation, the standardization of highways, which on one hand was a success because we built out these highways that connected the cities and moved commerce, created markets, right? And that was successful. But at the same time, it was a failure because we standardized highways and so pushed away pedestrians and made our streets dangerous. Three-part compound question, right? In that context, as you’re looking at the work that the cities and companies and institutions you work with are doing, what is the potential highway infrastructure that we are dangerously needing to build and dangerously not really thinking clearly about? You can take that question any which way you want. And so I’m going to leave it at: who wants to answer first?
Myrna Bittner I can start rambling so other people can think a little bit. When I think about the first part, the funhouse mirror: one of the big dangers about AI is always the assumptions, and it’s always the training assumptions that are made. And I would argue that, you know, even in that question or that statement there was an assumption, because there is a significant movement towards explainable AI, and that’s incredibly well documented. The biases in it are known and they’re being worked on. And it is open and it is transparent and it’s not black box. So, again, you know, that ability to understand that there’s differences in the world of AI, dramatic differences in the world of explainable and ethical AI, because we don’t want to perpetuate the trauma and the bias that has been, you know, baked into economies. Which brings me to the second point, which is that the ultimate goal is money. And I think that’s a big failure in business. I’m a business person, but that’s a big failure when all conversations, you know, around the table I was sitting at last night, ended in: there’s all these wonderful things we should be doing, and there’s all … but then the developers, they’re just businessmen. And so they make their decisions based on per square foot. And so it’s kind of like the world cannot afford to keep ending in “it’s all about money”. And so I think there needs to be … and that’s part of it, holding to account what’s going on in AI that’s, you know, not right and that’s unproductive and that’s dangerous and damaging. And on the other hand, you know, supporting the uses, the explainability, the AI that is crucial to us being able to understand complex issues and being able to see people and hear voices that have never been seen or heard or included in decision making before. That’s complex. That’s around social infrastructure, tied to transportation infrastructure, tied to energy, tied to, you know, new opportunities opening up with how we see community, place-based intelligence. That’s the AI that should be foundational for moving our futures forward. It takes us completely out of the funhouse, takes us completely away from the dysfunction that’s going on in a lot of the Gen-AI world. And my company is a Gen-AI company, but being outside of that dysfunctional Gen-AI that’s happening now makes us hyper-aware of just what the cost of that is in terms of our environment, in terms of the water, the energy consumption. We can’t let data centers, and just the proliferation of more and more and more data centers, turn us back to where we have to maintain our reliance on fossil fuels to fuel our energy grid just so they can have an uninterrupted source of power. We have to be more cognizant and forward-looking and careful, and celebrate the success of how we can use it as a tool to save our cities.
Benjamin de la Peña And can I pursue that very quickly? I agree. Clap, please. Can we actually make it planet-friendly?
Myrna Bittner Yes, we can. So we actually have an initiative called the Data Potata, where through very conscious efforts in Gen AI and base compute, we can actually minimize the amount of energy, because we don’t need to store the data, we can generate it when we need it. We can develop algorithms that actually generate the data, regenerate the data, so we can flush it and only use it when we need to use it. So there’s all kinds of initiatives, but they’re not highly recognized because they’re not glamorous.
Benjamin de la Peña And you’re not raising $4 billion.
Myrna Bittner We’re not raising $4 billion. We’re a certified Indigenous women-led, deep tech, weird company from Edmonton, Alberta, Canada, that is changing the world but doesn’t have access to the hype money.
Benjamin de la Peña So Data Potata is your thing. Ariana, go ahead.
Ariana Seferiades Maybe I’m going to add that there is also that question of not just finding, like, tech solutions to the energy problem, but also asking ourselves, okay, is this our future, or are we just developing more generative AI just to prompt ChatGPT … where are we going to have lunch today? So it’s just asking the question of what type of AI we want to develop. Generative AI and large language models are not the only type of AI; they’re just what won the mainstream conversation, and the money, probably, in the last couple of years. So just to have that in mind. And going back to your highway problem, I love this analogy, and I was just thinking about it: I see how we wanted to build infrastructure for people and we ended up, like, stuck with infrastructure for cars, around cars and not people. And we risk doing the same with AI. We want to build infrastructure to respond to citizen needs, but then we may just build infrastructure centered around algorithms. So I see two main problems. The first one is the problem of overreliance, what we call overreliance. So imagine, like, you’re trying to go somewhere and your car breaks down, or there’s a lot of snow, like today, so you’re stuck, right? In the same way as on the highway, AI can fail us, and this is already happening, because as Rupen was saying, AI is prone to error. Just this year, I don’t know if you’ve seen it in the news, but in the Netherlands an AI system failed, and had errors, and stopped making essential payments to citizens, and that created a huge problem. And who’s going to fix this? Right? Because we, the people that are not computer scientists, don’t actually have the knowledge to fix the algorithm. And even the people that actually created the algorithm sometimes don’t understand why it’s making the decisions that it’s making. And the second problem that I see is what I call time lags, and time lags means that when we are deploying these AI solutions in our cities, we don’t see the effects and the impacts of those AI solutions right now, in the present moment, but we may see them later in time, ten years, 20 years from now, and somewhere else, you know. So yeah, it’s not a positive.
Benjamin de la Peña Very quick question. Quick response. Yes. How do we keep people in the middle or in the centre?
Ariana Seferiades So I love what you’re saying. And beyond everything that I said before … I think there’s many things we can do, and I don’t want to end with this, you know, sort of apocalyptic vibe. But I think you need to go harder on education, as I said before, and really take the time and invest in developing AI governance frameworks for your city and your municipality, and include the community in the design, development and deployment of solutions that are meant to serve them. And, as everyone already mentioned, for me, most important of all, because I’m a social scientist working with computer scientists, it’s generating the space and the platform to bring different backgrounds and disciplines together and generate these conversations, because as much as we are each experts in one thing, we all have a partial understanding of what cities are and how AI is actually being integrated and modifying the landscape and ecosystem in the city. So bring different types of experts and generate those conversations and dialogues, like the ones we are having here at this conference.
Benjamin de la Peña Carol. And we have a few more minutes. Right. But you know, the highway and that metaphor and …
Carol McClellan So the highway metaphor and I think Ariana …
Benjamin de la Peña And picking up with Ariana … How do you keep people in the middle of it …
Carol McClellan In the loop? Well, yeah, I’m that hopeless optimist … you know, there’s a lot of theories that AI will take over one day, but I don’t necessarily agree with that. I think that, you know, AI is something that humans have developed, have invented. And I think AI is always going to be something that we use to better our lives, right? So to improve our conditions, to understand our environments, to understand that interconnectedness and all that kind of stuff. And so I think it’s always going to need humans in the loop to interpret and to understand and to confirm that what’s coming out of the AI isn’t just a hallucination or whatever, right? So … but on Ariana’s point, you know, AI is going to become more and more integrated into everything that we do, into our systems, into our operations, into our everyday lives. And one thing that we need to be very careful of is making sure that we’re treating it like the critical infrastructure that it’s going to become, right? And so we’re seeing things about, you know, highways are critical infrastructure, public transit is critical infrastructure, energy, all of those things are critical. And we need to make sure that the AI is resilient. And to Myrna’s, you know, points about sustainability and, you know, how data centers are not necessarily seen as the most sustainable thing: we need to think about that cradle to grave. What are all of the different impacts of AI, and how do we make sure it’s resilient?
Benjamin de la Peña Rupen, take us to home and how do we get past the hype cycle now?
Rupen Seoni I’m going to bring it quickly to, you know, very specifically the public realm. I think data governance is going to be a critical thing that the public sector has to really wrap its head around and not shoot itself in the foot, because there’s so many instances. I mean, a quick example is, you know, the Presto card that’s used in the GTA to pay transit fares. The transit agencies have great difficulty getting their own data out of that to be able to do anything and do analysis. And we run into that trying to help them do work with their ridership. I mean, this is not serving anybody well. And this is only going to expand if we’re trying to deal with intersectionality and use AI effectively across different types of information. We have to figure this out, because otherwise we aren’t going to be able to solve the problems in the way that we want to if we can’t put the data together properly. Maybe health care has a good model to follow, because they seem to have figured that out over the years, where there are, you know, ways of putting health data together and liberating it in a privacy-compliant way to allow analysis. I don’t know. But this is a … it’s a big issue, I think. The second thing is fix procurement. Oh my gosh. It’s really … it stifles innovation so much. You can’t procure innovation like you procure, you know, hand soap for the washrooms. I mean, it’s got to … and I’m sorry to be so flippant, but, you know, you’ve got small companies that are trying to innovate and it’s exhausting and it’s impossible to try to respond. I mean, there was that article that talked about architecture in Canadian cities a couple of weeks ago, and it was saying procurement is really the source of the problem, why we get mediocrity. And I think that’s kind of where we’re headed in other realms as well. So I’ll stop there.
Benjamin de la Peña Any of you want to invest in a conference called AI for Procurement? Talk to me about it. We’re a little bit over, but this is the quick round, three words … City and national leaders … the three words you would tell them as they think about AI and data infrastructure. We’ll go in the reverse direction. And so, quickly, Carol.
Carol McClellan Three words. “Let’s do this” …
Benjamin de la Peña Let’s do this … cool … Rupen
Rupen Seoni I’ll echo that. Let’s do it. Yes, we need to experiment.
Benjamin de la Peña Okay. Myrna
Myrna Bittner It’s a tool.
Benjamin de la Peña It’s a tool.
Ariana Seferiades Education, Governance. Experimentation.
Benjamin de la Peña Fantastic. And on that note, help me thank my wonderful panel.