This month's TSQL Tuesday invite is from my good friend, long-standing MVP, and community volunteer Taiob Ali. Taiob's call is to blog on how AI (the biggest invention since the internet, according to some) is changing our careers.
The place I work at is passionate about AI adoption, and we are exploring many tools in that regard. I may not be able to share details of exact usage for privacy reasons, so these are my personal experiences.
How I use it personally
I have not played with many AI tools. I use a paid version of ChatGPT, which I find helpful for the following reasons.
1. To generate small amounts of test data for demos and other purposes. It is very good at this, especially if I can provide the table(s) and ask it to generate INSERT statements.
2. To review my blog posts and ask for suggestions on the English, or whether the text matches the tone I have in mind.
3. For occasional art generation, such as thank-you card logos for events like SQL Saturday. I have had some experiences there that I can blog about separately.
4. For simplifying complex text in research papers. I must read a lot of research papers for school, and sometimes the language is too hard for me to follow, so I ask for help with one paragraph at a time. It is not capable of condensing all of it; it gets worse with more data, and it has little memory for what you asked earlier. Even with these limitations, it can be helpful.
5. For assistance with R programming. Maybe because R is an open-source product, the help you can get is fantastic and saves you hours. I do not cut and paste any code; I ask it specific questions like 'how do I increase the font on the legend of this scatterplot' (see the sketch after this list).
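As an example of the kind of answer that question gets, here is a minimal R sketch (using ggplot2 and the built-in mtcars data, not my actual code) of increasing the legend font size on a scatterplot:

```r
library(ggplot2)

# A basic scatterplot with a color legend, using the built-in mtcars data
p <- ggplot(mtcars, aes(x = wt, y = mpg, color = factor(cyl))) +
  geom_point() +
  labs(color = "Cylinders")

# Increase the legend title and label font sizes via theme()
p + theme(
  legend.title = element_text(size = 16),
  legend.text  = element_text(size = 14)
)
```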
My experience so far is that it is another tool in my toolkit. While I have not had transformative experiences yet, it has proven to be helpful in my daily work. I also have thoughts on the ethical challenges and mental health concerns it raises. For my own sanity, I do not address it or talk to it like a person. Nor do I engage in experiments like some do, such as debating it about what it says or trying to coax a cool answer to share on social media. It is strongly 'it' to me, not a person with reasoning skills or feelings.
Some of the links that I have found helpful in that regard are below.
Resources that I recommend.
Harvard neuroscientist Dr Srini Pillay's interview on balanced usage of AI, with warnings about the impact on the brain if it is used too much. This is not a pro- or anti-AI talk; it is very pragmatic and eye-opening on how much, and why, we need to use these tools. Dr Pillay explains how to find balance in the confusing world we are in by using AI appropriately and paving the way for healthy, innovative outcomes.
A research study shared by my colleague Mark Wilkinson found that AI does not necessarily improve productivity among programmers. The study is based on a small sample of programmers and has interesting findings on productivity.
Stack Overflow 2025 survey results related to AI – of particular interest are the number of people using it at work, challenges with trust, and AI tool versus AI agent usage. The survey also comes with a dataset we can use to explore further – the largest dataset of developer opinions available.
Long-time SQL Server MVP and data scientist Kevin Feasel, who is my go-to guy for all things data science related, wisely pointed out that Generative AI is hardly the only form of AI. It is easy to forget this critical fact, given that the term AI is used to refer to just generative AI these days. Here is a blog post teaching us about other forms of AI.
Ethical considerations
Last but hardly least, there are lots of ethical issues surrounding AI. My humble research using Stack Overflow data from last year (still a work in progress) is here. I follow an Australian researcher named Kate Crawford, who has written a fantastic book called ‘Atlas of AI’. She highlights what goes into AI in the form of environmental resources, cheap labor, and many other factors. She also has many talks on YouTube that are worth listening to.
Data Platform MVP and longtime volunteer/mentor Eugene Meidinger has a great post on AI ethics in the context of Power BI. I loved one of his quotes – to always paste 'into' it and not 'out of' it.
Conclusion
All of this said, AI is a game-changer, like it or not. There are basically two strong stances about its future – one that thinks it will die down, if not go away, because of how much garbage goes into it over time, and another that says it will pave the way for a new future. Most of us are, it is safe to say, somewhere in the middle and confused about where we will land with it. My own stance: use it in a limited way, stay informed and rely on educated resources, be open to possibilities, and stay grounded in your ethical stances.
AI is considered the new superpower. The adoption of AI in various capacities is at 72% across industries, worldwide, according to one study, and it does not show signs of slowing down. Meanwhile, concerns about ethical issues surrounding AI are also high. According to a Pew Research report published in April 2025, more than 60% of the general public polled expressed concerns about misinformation, the security of their data, and bias or discrimination. As database technologists and software developers, we play a crucial role in this evolution. A 2024 GitHub research survey indicated that more than 97% of respondents were already using AI for coding. Many of us may also be involved in developing AI-based software in various forms. But how aware and conscious are we of ethical issues surrounding AI? Granted, our usage of AI may be driven by work-related reasons, but what about our own personal stances? Are we aware of ethical issues, and do these issues factor into our perception of AI in any way?
Studies reveal that developers exhibit only moderate familiarity with ethical frameworks, including fairness, transparency, and accountability. According to a 2025 survey covering 874 AI incidents, 32.1% of developer participants had taken no action to address ethical challenges (Zahedi, Jiang, Davis, 2025). Another study in 2024 demonstrated the need for 'comprehensive frameworks to fully operationalize Responsible AI principles in software engineering' (Leca, Bento, Santos, 2024).
The purpose of this blog post is to look at ethical concerns related to AI as expressed by developers in the Stack Overflow Developer Survey, 2024.
The dataset comprises 41,999 observations (after removing individuals under 18 and those without a stance on AI) across developers in 181 countries. After the transformations, it appears as follows.
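For reference, here is a minimal sketch of the cleansing step described above, assuming hypothetical file and column names (the actual survey export may differ):

```r
library(dplyr)

# Read the raw survey export (file and column names are illustrative)
raw <- read.csv("survey_results_public.csv")

# Drop respondents under 18 and those without a stance on AI
clean <- raw %>%
  filter(Age != "Under 18 years old",          # hypothetical age label
         !is.na(AISent), AISent != "")         # hypothetical stance column

nrow(clean)   # roughly 41,999 observations after cleansing
```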
The questions I want to analyze, with the concerned variables, are as follows.
1. How do ethical concerns correlate to how favorable or unfavorable the stance is?
The outcome (Stance on AI) as related to the potential predictor (the six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones).
2. How does productivity as a gain correlate to how favorable or unfavorable the stance is?
The outcome (Stance on AI), as related to the potential predictor (Productivity Gain).
3. How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is? The outcome (Stance on AI) as related to the potential predictor (Productivity Gain), along with the six ethical concerns.
4. How does bias as an ethical issue and the age of the developer relate to the stance on AI? The outcome (Stance on AI) as related to bias as an ethical issue, along with the respondent's age bracket.
Methodology
The outcome being analyzed for all four questions is the AI stance, a Likert scale with 5 values in increasing order of favorability. (This is a dummy variable created from the verbiage-based responses in the original dataset.) The 'predictor' variables, the ones whose impact we are analyzing (ethical concerns and productivity), are binary. Age, the variable considered in the last question, is categorical with age brackets. I have used 'odds ratios' and 'predicted probability' to explain the findings, as they are simple and easy to understand. 'Odds ratio' here means the odds of a favorable AI stance over a neutral or unfavorable one. Predicted probability is the chance of an event (in this case, a particular stance on AI) happening out of all possibilities.
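For readers who want to reproduce this kind of analysis, here is a minimal sketch of how such an ordinal (proportional odds) model could be fit in R with MASS::polr, assuming hypothetical column names; odds ratios come from exponentiating the coefficients:

```r
library(MASS)

# 'df' is the cleansed dataset; Stance is an ordered factor with 5 levels
# (1 = least favorable, 5 = most favorable); the concern indicators are 0/1
df$Stance <- factor(df$Stance, levels = 1:5, ordered = TRUE)

# Proportional odds (ordinal logistic) model: stance ~ the six ethical concerns
m1 <- polr(Stance ~ Bias + Misinformation + Attribution +
             EnergyDemand + Impersonation + JobReplacement,
           data = df, Hess = TRUE)

# Odds ratios with 95% confidence intervals
exp(cbind(OR = coef(m1), confint(m1)))
```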
Descriptive Statistics of the dataset
The top ten countries by respondents are as follows, with the US having a significantly higher number of respondents. This may be because the US has a significantly high number of developers in general. It also means overall views may be skewed in favor of US respondents. For this analysis, I have not filtered the dataset by country, although this may be a worthwhile consideration for the next phase.
Stances on AI were overwhelmingly positive, with nearly half the respondents (48.2%) rating it as most favorable. Just 1.2% rated it as very unfavorable, with the rest falling between the two extremes.
65% of respondents reported productivity gains with AI.
There were a total of six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones. (Some additional responses were too specific to include in the analysis.) The majority of respondents had more than one ethical concern. Misinformation (25.8%) and lack of attribution (21.03%) ranked highest among the concerns. Very few respondents (2.11%) had no ethical concerns.
Question 1: How do ethical concerns correlate to how favorable or unfavorable the stance is?
I decided to group all the ethical concerns and weigh them against the stance, because most respondents have multiple ethical concerns. I also verified whether concerns overlap (i.e., whether the impact of one ethical concern is captured by another – what statistics calls 'multicollinearity'). This was not the case, as demonstrated by the image below (the values in the boxes are small compared to 1).
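A simple way to check this in R is a pairwise correlation matrix of the binary concern indicators; a sketch, again with illustrative column names:

```r
# Pairwise correlations among the 0/1 concern indicators;
# values close to 1 would suggest overlapping (collinear) concerns
concerns <- df[, c("Bias", "Misinformation", "Attribution",
                   "EnergyDemand", "Impersonation", "JobReplacement")]
round(cor(concerns), 2)
```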
The results of the analysis were as follows.
Data Source: 2024 Stack Overflow Developer Survey
Odds are the ratio of an event happening to its not happening; in our case, the 'event' is a more favorable stance. Except for biased results, all other ethical concerns have odds ratios of less than 1, indicating a less favorable stance. Even with biased results, there is only a slight increase in the odds of a favorable stance, and that may be related to other factors we have not considered. Energy demand appears to have the highest correlation with a lowered stance.
Question 2: How does productivity as a gain correlate to how favorable or unfavorable the stance is?
Predicted probability is the chance of an event (in this case, a particular stance on AI) happening out of all possibilities. The graph shows that respondents who report productivity gains have a high predicted probability of a high stance (4 or 5 – the tall green bars). However, it also shows that these stances are taken by some with no productivity gains (the red bars are also high for stance 4, although not very high for stance 5). Many people with no productivity gains exhibit a moderate stance (the tall red bar at 3).
Data Source: 2024 Stack Overflow Developer Survey
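A minimal sketch of how predicted probabilities like the ones charted above can be produced, with productivity gain as the only predictor (column names are illustrative):

```r
library(MASS)

# Model with productivity gain (0/1) as the only predictor
m2 <- polr(Stance ~ ProductivityGain, data = df, Hess = TRUE)

# Predicted probability of each stance level (1-5), with and without gains
newdata <- data.frame(ProductivityGain = c(0, 1))
cbind(newdata, predict(m2, newdata = newdata, type = "probs"))
```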
Question 3: How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is?
This question examines how stances on AI change when considering both ethical issues and productivity factors. Out of the six ethical issues, I chose two – concerns around bias and misinformation. The charts are below and were mostly similar. The data falls into four buckets (a sketch of how these buckets can be constructed follows at the end of this section):
1. Those with gains and concerns (red)
2. Those with gains and no concerns (green)
3. Those with no gains and concerns (blue)
4. Those with no gains and no concerns (purple)
Data Source: 2024 Stack Overflow Developer Survey
All things being equal: those with gains and concerns (red bars) show highly favorable stances (4 or 5); those with gains and no concerns (green bars) show neutral to favorable, but not highly favorable, stances – perhaps other usage-related factors are at play here; those with no gains and concerns tend to be moderate to favorable, with some also less favorable (blue bars); and those with no gains and no concerns lean toward neutral to favorable (purple bars).
It may seem odd that those with no gains and no concerns have favorable stances. There may be other variables at play here that we have not considered, such as gains other than productivity. This, again, is something to examine during the next phase of analysis.
Overall, productivity gains appear to be associated with more favorable stances (green and red bars).
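As promised above, here is a sketch of how the four buckets can be constructed from the two 0/1 indicators and charted, using bias as the example concern (column names are illustrative):

```r
library(dplyr)
library(ggplot2)

# Combine productivity gain and one concern (bias here) into four buckets
df <- df %>%
  mutate(Bucket = case_when(
    ProductivityGain == 1 & Bias == 1 ~ "Gains, concerns",
    ProductivityGain == 1 & Bias == 0 ~ "Gains, no concerns",
    ProductivityGain == 0 & Bias == 1 ~ "No gains, concerns",
    TRUE                              ~ "No gains, no concerns"
  ))

# Proportion of respondents at each stance level within each bucket
ggplot(df, aes(x = Stance, fill = Bucket)) +
  geom_bar(aes(y = after_stat(prop), group = Bucket), position = "dodge") +
  labs(y = "Proportion within bucket")
```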
Question 4: How does bias as an ethical issue and the age of the developer relate to the stance on AI?
The analysis of bias as an ethical issue, together with the respondent's age bracket, against stances on AI is presented below.
Source: 2024 Stack Overflow Developer Survey
All things being equal, the odds of people in the oldest age bracket (over 65 years old) taking a less favorable stance seem significantly higher compared to those in the youngest age bracket (25-34 years old).
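A sketch of how this comparison can be set up, releveling the age bracket so the youngest group is the reference category (the level labels are illustrative):

```r
library(MASS)

# Make the youngest bracket the reference, so odds ratios compare each
# older bracket against it (bracket labels are illustrative)
df$AgeBracket <- relevel(factor(df$AgeBracket), ref = "25-34 years old")

m4 <- polr(Stance ~ Bias + AgeBracket, data = df, Hess = TRUE)
exp(coef(m4))   # odds ratios relative to the 25-34 reference group
```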
Results
1. How do ethical concerns correlate to how favorable or unfavorable the stance is?
This analysis focused on the outcome (Stance on AI) as correlated to the potential predictor (the six ethical concerns: biased results, misinformation, lack of attribution, energy demand, impersonation or likeness, and the potential for replacing jobs without creating new ones). Energy Demand as a concern appeared to have the highest correlation to less favorable stances. All other ethical issues exhibited a correlation with less favorable stances, except for bias, which showed a slightly positive correlation.
2. How does productivity as a gain correlate to how favorable or unfavorable the stance is?
This analysis focused on the outcome (Stance on AI) as related to the potential predictor (Productivity Gain). Productivity gains are associated significantly with higher stances, although the lack of gains doesn't necessarily mean a lower stance.
3. How does productivity as a gain, combined with ethical concerns, correlate to how favorable or unfavorable the stance is? This question led to analyzing the outcome (Stance on AI) in relation to the potential predictor (Productivity Gain), along with the six ethical concerns.
Those with gains and concerns show highly favorable stances. Those with gains and no concerns exhibit neutral to favorable attitudes, but not highly favorable ones. Those with no gains and concerns seem moderate to favorable, with some also expressing less favorable views. Those with no gains and no concerns seem to lean neutral to favorable.
4. How does bias as an ethical issue and the age of the developer relate to the stance on AI? The last analysis examined the outcome (Stance on AI) as related to bias as an ethical issue, considering the respondent's age bracket. People in the oldest age bracket (over 65 years old) appear significantly more likely to take a less favorable stance compared to those in the youngest age bracket (25-34 years old).
Key findings summarized
The majority of respondents expressed ethical concerns.
Energy Demand as a concern appeared to have the highest correlation to less favorable stances.
All other ethical issues had a correlation with less favorable stances, except bias.
Productivity gains seemed associated with higher stances despite ethical concerns.
Bias and misinformation as concerns do not appear to significantly impact higher stances.
Favorable stances appear to be high overall, regardless of productivity or ethical issues.
Further work
Examine the impact of other gains besides productivity. Filter the dataset by specific countries for more insight into country-specific data.
Limitations
It is critical to bear in mind that correlation does not imply causation, and that favorable or less favorable stances do not necessarily reflect the presence or absence of ethical concern. However, given the patterns found, it is worth researching further to explore possible deeper relationships with demographics (country, age), and also filtering the dataset by specific countries to gain more insight. The dataset is also limited to developers, not specifically those working on AI, although some of them may be; perspectives and findings may vary with a dataset of AI developers. The dataset is also heavily skewed toward respondents from the USA compared to those from other countries.
Gao, H., Zahedi, M., Jiang, W., Lin, H. Y., Davis, J., & Treude, C. (2025). AI Safety in the Eyes of the Downstream Developer: A First Look at Concerns, Practices, and Challenges.
I attended the PASS Data Community Summit in Seattle in person this year, after a long gap of four years and after RedGate Software took over running the summit.
The place I work at had stopped paying for in-person training, making it an expensive decision to attend if I wanted to. I had not submitted to speak or planned on attending until about August, when my boss found a backlog of unused vacation that I needed to use before the year ended. I had plenty of vacation, was able to secure airline tickets with my points, got affordable Airbnb accommodation close to the convention center, and booked a trip to India after the summit. In short, it was meant to happen, and it did.
Some specific observations are as below.
1. The new convention center was an amazing location. The distance to classrooms was optimal and not a hike like at the older place. It was a modern building with several areas to sit around and network, and huge glass panes that let in sunlight. It made for a great experience.
2. RedGate did an amazing job with organizing. Everything was very smooth, starting with registration. There were many opportunities to network, even if one was not a party or late-night person. Coffee and tea were set up all day until 5 p.m. Friday.
3. The 'Experts' clinic, which replaced MSFT's SQL clinic, was staffed by MVPs/consultants and seemed a huge success. People lined up all day and seemed to get the answers they needed.
4. There were many case study presentations – moves to AWS/Azure accounted for several of them.
5. I was invited to one of several closed-door discussions on tech careers, managing data in the cloud, and other topics. Several people expressed frustrations about hasty moves to the cloud and how much they cost their company. Some felt these costs were passed down as pay cuts and low salaries. I was also selected to be interviewed by Louis Davidson, one of my #sqlheroes and among the senior community members I look up to. It was a great conversation.
6. The 'community zone' was set up away from the dining rooms and classrooms, making it a place specifically for people to hang out and engage in conversation. It worked amazingly well. There were informal sessions here, too – I got to do one on mentoring and community with my good friend Chris Yates and greatly enjoyed it. There were sessions on hobbies and various fun activities here as well.
7. Very few MSFT employees were present on site. Several took time to drive in to meet friends on their own, and I was very touched personally that they took time for me. I hope the formal presence of MSFT will improve at future summits; if not, the conference will take on a very different shape. For the first time in history, the new SQL Server version was not announced at the Summit.
8. RedGate put on a Postgres conference in the same venue for half the cost, and both conferences shared the vendor area. It was a good move and made it possible for me to meet some amazing people – particularly Adam Machanic, one of my #sqlheroes, and Karen Schuler, my good friend and long-time community volunteer from Louisville.
9. I had a list of newer folks in the community whom I wanted to meet. I ended up meeting many more. It was a positive experience, and I felt good about the future of the community – although it would be very different from the one I was used to.
Observations not directly related to summit
1. Many people I talked to felt confused and worried about the job market. SQL Server as a hardcore technical skill seemed less in demand, although it is very much around. The 'other' skills needed were spread over a wide spectrum, ranging from NoSQL platforms to AI technologies. Pay was much lower than five years ago, and in-person work is now largely expected.
2. Several people felt that SQL Server as a product was not getting as much love from MSFT compared to Fabric and AI. What that means for us career-wise remains to be seen and, to me, is strongly related to how many years of work one has ahead before retirement. It is, for sure, time to adapt and learn a lot more stuff.
3. The loss of Twitter/X as the main networking platform was felt deeply. The only social event I attended personally was the RedGate volunteer party; I had lunches and dinners with several friends in private and headed home early on most nights. Granted, this was a choice for me – but one did not even know of other social events, formal or informal, because there wasn't a platform to communicate as a community anymore.
4. Grant Fritchey talks of 'fragmented' as his word of choice to describe the year. That would be my choice, too – especially with regard to the community. Many people we used to hang out with have retired or moved on to other work, and many have intentionally limited their contacts. I realize that the 'glue' that kept us friends was community politics, a common technology (SQL Server), and the many in-person events where we used to see each other before 2020. The politics is different now, the technology has expanded to many other platforms, and in-person events are drastically fewer. That, combined with losing X, leads to a highly fragmented community.
I missed the older, bigger crowd – but the friends who sought me out, and whom I have now, are those who want to stay in touch because they value me as a person over politics, technology, and other common talk topics. In other words, the 'real' people I want in my life. That is nothing but a good thing.
I hope to attend PASS Summit 2025 to deepen a few existing friendships and make newer connections, as well as learn and share our concerns about where we are heading. I want to thank RedGate Software sincerely for making me feel welcome and helping me participate in many ways.