I am responding late to a T-SQL Tuesday invite from John Sterrett. John's call is about ways to grow the next generation of data community members and speakers.
I’m going to take a brief detour to talk about what held us together as a community over the past two decades.
We worked on a fantastic product – Microsoft SQL Server. It was thriving, growing in leaps and bounds. Each new release brought exciting features that sparked dialogues, blog posts and in-depth conversations. Jobs were plentiful if you had expertise in even one area of this vast product. We referred each other for job roles, building strong professional ties.
We saw each other often – at SQL Saturdays, PASS Summit, and other events. Between 2005 and 2018, I averaged about five events per year. We saw familiar faces and had plenty to talk about: the latest release, what worked and what didn’t, who’s hiring, who’s moving where. PASS had its fair share of politics, which added to the chatter. Twitter/X was our central hub – we knew who was attending events, where the after-parties were, and whose blogs to follow.
Then came COVID. Many of us shifted to working from home. PASS dissolved, giving way to smaller, independent or Azure-linked user groups. Some disappeared entirely. Event funding dropped and never really bounced back. SQL Server matured – still solid, but with fewer shiny new features. Twitter/X changed hands and tone, becoming more political, pushing many, even long-time influencers, away.
Meanwhile, job descriptions changed. SQL Server expertise wasn’t enough. Employers now ask for Postgres, Python, CI/CD, and more.
My late friend Brian Moran used to say that as we age, our “outer circle” gets bigger, while our “inner circle” – those we truly trust – shrinks. I found this painfully true during COVID. Pre-COVID, I had a long list of people to catch up with at events. Post-COVID, I realized many were just contacts. I don’t come from a culture that views friendship as transactional. That, combined with the discovery that many people didn’t care as much as I thought they did, left me in a difficult place.
Why didn’t they care? Partly because the West tends to treat relationships transactionally. And partly because the reasons for our interactions – events, jobs, shared tools – weren’t there anymore.
So what’s next? Is this the end of what we call “community”? I hope not.
In these tougher years, I’ve made new friends among younger speakers. I’ve learned how to support them – and be supported in return. Here are a few things that helped me:
Actively seek out and befriend new faces. Podcasts like Finding Data Friends by Ben Weissman and Jess Pomfret are great starting points. LinkedIn is another good space. Remember – tech today is much broader than SQL Server. I follow blogs on diversity, mental health, analytics, AI, and more.
Attend at least one event per year. If that’s not feasible, join a local user group. If that’s still tough, try a virtual event. I’m lucky to still attend PASS Summit and local meetups when I can.
Show genuine interest in people. COVID taught me that conversations based solely on tech or politics are fleeting. Regardless of cultural norms, people crave authentic connection. Ask how someone is doing – and mean it.
What’s positive about today’s community?
There’s far more diversity now.
Conversations feel smoother – even without shared tech or politics.
The younger generation is self-aware, clear on what works for them, and eager to extract value from their contributions.
Lots to learn, even for an old geek like me.
So, to answer John’s question about how to grow community: find what already exists, and participate – however you can. Real growth comes from real human connection.
AI is considered the new superpower. The adoption of AI in various capacities is at 72% across industries, worldwide, according to one study, and it does not show signs of slowing down. Meanwhile, concerns about ethical issues surrounding AI are also high. According to a Pew Research report published in April 2025, more than 60% of the general public polled expressed concerns about misinformation, the security of their data, and bias or discrimination. As database technologists and software developers, we play a crucial role in this evolution. A 2024 GitHub research survey indicated that more than 97% of respondents were already using AI for coding. Many of us may also be involved in developing AI-based software in various forms. But how aware and conscious are we of ethical issues surrounding AI? Granted, our usage of AI may be driven by work-related reasons, but what about our own personal stances? Are we aware of ethical issues, and do these issues factor into our perception of AI in any way?
Studies reveal that developers exhibit only moderate familiarity with ethical frameworks, including fairness, transparency, and accountability. According to a 2025 study of 874 AI incidents, 32.1% of developer participants had taken no action to address ethical challenges (Gao et al., 2025). Another study in 2024 demonstrated the need for 'comprehensive frameworks to fully operationalize Responsible AI principles in software engineering' (Leca, Bento, Santos, 2024).
The purpose of this blog post is to look at ethical concerns related to AI as expressed by developers in the Stack Overflow Developer Survey, 2024.
The dataset comprises 41,999 observations (after cleansing to remove individuals under 18 and those without a stance on AI) across developers in 181 countries. After the transformations, it appears as follows.
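As an illustration, here is a minimal sketch of that cleansing step. The field names (`Age`, `AISent`) are assumptions modeled loosely on the survey schema, not the exact columns used in the analysis.

```python
# Sketch of the cleansing step: drop respondents under 18 and those
# with no recorded stance on AI. Field names are assumptions, not the
# actual survey columns.

def cleanse(rows):
    """Return only rows with an adult respondent and a non-empty AI stance."""
    kept = []
    for r in rows:
        if r.get("Age") == "Under 18 years old":
            continue
        if not r.get("AISent"):  # missing or empty stance
            continue
        kept.append(r)
    return kept

sample = [
    {"Age": "25-34 years old", "AISent": "Favorable"},
    {"Age": "Under 18 years old", "AISent": "Favorable"},
    {"Age": "35-44 years old", "AISent": None},
]
print(len(cleanse(sample)))  # → 1
```

In practice this filter would run over the full survey export before any modeling.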
The questions I want to analyze, with the concerned variables, are as follows.
1. How do ethical concerns correlate to how favorable or unfavorable the stance is?
The outcome (Stance on AI) as related to the potential predictor (the six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones).
2. How does productivity as a gain correlate to how favorable or unfavorable the stance is?
The outcome (Stance on AI), as related to the potential predictor (Productivity Gain).
3. How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is?
The outcome (Stance on AI) as related to the potential predictor (Productivity Gain), along with the six ethical concerns.
4. How does bias as an ethical issue and the age of the developer relate to the stance on AI?
The outcome (Stance on AI) as related to bias as an ethical issue, along with the respondent's age bracket.
Methodology
The outcome analyzed for all four questions is the AI stance, a Likert scale with five values in increasing order of favorability. (This is a dummy variable created from the verbiage-based responses in the original dataset.) The 'predictor' variables, the ones whose impact we are analyzing (ethical concerns and productivity), are binary. Age, the variable considered in the last question, is categorical, with age brackets. I have used odds ratios and predicted probabilities to explain the findings, as they are simple and easy to understand. 'Odds ratio' here means the odds of a favorable AI stance relative to a neutral or unfavorable one. 'Predicted probability' is the chance of a given stance on AI out of all possibilities.
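To make the two measures concrete, here is the arithmetic on a small hypothetical 2x2 table. The counts below are made up for illustration and are not from the survey.

```python
# Illustration of odds ratio and predicted probability.
# All counts are made-up examples, NOT survey data.

# Hypothetical 2x2 table:
#   rows    = concern present / absent
#   columns = favorable / not-favorable stance
favorable_with_concern, unfavorable_with_concern = 300, 200
favorable_no_concern, unfavorable_no_concern = 400, 100

# Odds = favorable count divided by not-favorable count, within each group.
odds_with = favorable_with_concern / unfavorable_with_concern      # 1.5
odds_without = favorable_no_concern / unfavorable_no_concern       # 4.0

# Odds ratio below 1 means the concern is associated with lower odds
# of a favorable stance.
odds_ratio = odds_with / odds_without                              # 0.375

# Predicted probability: convert odds back to a probability.
prob_with = odds_with / (1 + odds_with)                            # 0.6
print(odds_ratio, prob_with)
```

With these toy numbers, having the concern cuts the odds of a favorable stance to 0.375 of the no-concern group's odds, while the concern group's predicted probability of a favorable stance is 0.6.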
Descriptive Statistics of the dataset
The top ten countries by respondents are as follows, with the US having a significantly higher number of respondents. This may be because the US has a significantly higher number of developers in general. It also means overall views may be skewed in favor of US respondents. For this analysis, I have not filtered the dataset by country, although that may be a worthwhile consideration for the next phase.
Stances on AI were overwhelmingly positive, with nearly half the respondents (48.2%) rating it as most favorable. Just 1.2% rated it as very unfavorable, with the rest falling between the two extremes.
65% of respondents reported productivity gains with AI.
There were a total of six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones. (Some additional write-in concerns were too individual to include in the analysis.) The majority of respondents had more than one ethical concern. Misinformation (25.8%) and lack of attribution (21.03%) ranked highest among the concerns. Very few respondents (2.11%) had no ethical concerns.
Question 1: How do ethical concerns correlate to how favorable or unfavorable the stance is?
I decided to group all the ethical concerns and weigh them against the stance, since most respondents have multiple ethical concerns. I also verified whether the concerns overlap (i.e., whether the impact of one ethical concern is captured by another, known in statistics as 'multicollinearity'). This was not the case, as the image below demonstrates: the values in the matrix are small compared to 1.
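The overlap check amounts to computing pairwise correlations between the binary concern indicators. A small pure-Python sketch with toy 0/1 vectors (not the survey data) shows the idea; applied to the real indicators, values well below 1 in magnitude support keeping all six concerns in the model.

```python
# Sketch of the multicollinearity check: pairwise Pearson correlation
# between binary concern indicators. Toy vectors, not survey data.
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy 0/1 indicators for two concerns across six respondents.
misinformation = [1, 0, 1, 1, 0, 0]
energy_demand = [0, 1, 1, 0, 0, 1]

r = pearson(misinformation, energy_demand)
# A value near 0 (small compared to 1) suggests the two concerns are
# not collinear, so both can be kept as separate predictors.
print(r)
```

For 0/1 variables this Pearson correlation is the phi coefficient, which is a reasonable pairwise screen before fitting the model.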
The results of the analysis were as follows.
Data Source: 2024 Stack Overflow Developer Survey
Odds are the ratio of an event happening to it not happening. In our case, the 'event' is holding a more favorable stance. Except for biased results, all other ethical concerns have odds ratios below 1, indicating an association with a less favorable stance. Even with biased results, there is only a slight increase in stance, which may be related to other factors we have not considered. Energy demand appears to have the strongest association with lowered stances.
Question 2: How does productivity as a gain correlate to how favorable or unfavorable the stance is?
Predicted probability is the chance of a given stance on AI out of all possibilities. The graph shows that respondents with productivity gains have a high predicted probability of a high stance (4 or 5; the tall green bars). However, it also shows that these stances are taken by some with no productivity gains (the red bars are also high for stance 4, though not very high for stance 5). Many people with no productivity gains exhibit a moderate stance (the tall red bar at 3).
Data Source: 2024 Stack Overflow Developer Survey
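Predicted probabilities for an ordered outcome like the five-point stance typically come from an ordered-logit model, where cumulative probabilities at each cutpoint are differenced into per-category probabilities. The sketch below uses made-up cutpoints and a made-up coefficient for illustration; these are not the fitted estimates from the survey model.

```python
# Sketch: per-stance predicted probabilities from an ordered-logit model.
# Cutpoints and coefficient are hypothetical, NOT fitted survey estimates.
from math import exp

def logistic(z):
    return 1 / (1 + exp(-z))

cutpoints = [-3.0, -1.5, 0.0, 1.5]  # hypothetical thresholds between the 5 stances
beta_gain = 0.8                     # hypothetical effect of reporting a productivity gain

def stance_probs(has_gain):
    """Probabilities of stances 1..5 given the binary productivity-gain predictor."""
    xb = beta_gain * has_gain
    # Cumulative P(stance <= k) at each cutpoint, capped at 1.0 for stance 5.
    cum = [logistic(c - xb) for c in cutpoints] + [1.0]
    # Difference cumulative probabilities into per-category probabilities.
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, 5)]

probs_gain = stance_probs(1)
probs_none = stance_probs(0)
# With a positive coefficient, probability mass shifts toward stances 4-5
# for the gain group, matching the tall green bars in the chart.
```

Each list sums to 1, and the gain group's probability of stance 5 exceeds the no-gain group's, which is the qualitative pattern the chart describes.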
Question 3: How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is?
This question examines how stances on AI change when considering both ethical issues and productivity factors. Out of the six ethical issues, I chose two: concerns around bias and misinformation. The charts, shown below, were mostly similar. The data falls into four buckets: (1) those with gains and concerns (red), (2) those with gains and no concerns (green), (3) those with no gains but with concerns (blue), and (4) those with no gains and no concerns (purple).
Data Source: 2024 Stack Overflow Developer Survey
All things being equal, those with gains and concerns (red bars) show highly favorable stances (4 or 5). Those with gains and no concerns (green bars) also show neutral to favorable stances, but not highly favorable ones; perhaps other factors related to usage are at play here. Those with no gains but with concerns (blue bars) tend to be moderate to favorable, with some also being less favorable. Those with no gains and no concerns (purple bars) seem to lean toward neutral to favorable.
It may seem odd that those with no gains and no concerns seem to have favorable stances. There may be other variables at play here that we have not considered, such as gains other than productivity, for example. This again is something to examine during the next phase of analysis.
Overall, productivity gains appear to show more favorable stances (green and red bars).
Question 4: How does bias as an ethical issue and the age of the developer relate to the stance of AI?
The analysis of bias as an ethical issue, with the respondent's age bracket added and weighed against stances on AI, is presented below.
Source: 2024 Stack Overflow Developer Survey
All things being equal, the odds of people in the oldest age bracket (over 65 years old) taking a less favorable stance seem significantly higher compared to those in the youngest age bracket (25-34 years old).
Results
1. How do ethical concerns correlate to how favorable or unfavorable the stance is?
This analysis focused on the outcome (Stance on AI) as correlated to the potential predictor (the six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones). Energy Demand as a concern appeared to have the highest correlation to less favorable stances. All other ethical issues exhibited a correlation with less favorable stances, except for bias, which showed a slightly positive correlation.
2. How does productivity as a gain correlate to how favorable or unfavorable the stance is?
This analysis focused on the outcome (Stance on AI) as related to the potential predictor (Productivity Gain). Productivity gains are significantly associated with higher stances, although their absence doesn't necessarily mean lower stances.
3. How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is?
This question led to analyzing the outcome (Stance on AI) in relation to the potential predictor (Productivity Gain), along with the six ethical concerns.
Those with gains and concerns show highly favorable stances. Those with gains and no concerns exhibit neutral to favorable attitudes, but not highly favorable ones. Those with no gains and concerns seem moderate to favorable, with some also expressing less favorable views. Those with no gains and no concerns seem to lean neutral to favorable.
4. How does bias as an ethical issue and the age of the developer relate to the stance on AI?
The last analysis examined the outcome (Stance on AI) as related to bias as an ethical issue, considering the respondent's age bracket. The odds of people in the oldest age bracket (over 65 years old) taking a less favorable stance appear significantly higher compared to those in the youngest age bracket (25-34 years old).
Key findings summarized
The majority of respondents expressed ethical concerns.
Energy Demand as a concern appeared to have the highest correlation to less favorable stances.
All other ethical issues correlated with less favorable stances, except bias.
Productivity gains seemed associated with higher stances despite ethical concerns.
Bias and misinformation as concerns do not appear to significantly impact higher stances.
Favorable stances appear to be high overall, regardless of productivity or ethical issues.
Further work
Examine the impact of other gains besides productivity. Filter the dataset by specific countries for more insight into country-specific data.
Limitations
It is critical to bear in mind that correlation is not causation, and that favorable or less favorable stances do not necessarily reflect ethical concern or the lack of it. However, given the patterns found, it is worth researching further to explore possible deeper relationships with demographics (country, age), and filtering the dataset by specific countries for more insight. The dataset is also limited to developers, not specifically those working on AI, although some may be; perspectives and findings may vary with a dataset of AI developers. The dataset is also heavily skewed toward respondents from the USA compared to other countries.
References
Gao, H., Zahedi, M., Jiang, W., Lin, H. Y., Davis, J., & Treude, C. (2025). AI Safety in the Eyes of the Downstream Developer: A First Look at Concerns, Practices, and Challenges.