TSQL Tuesday #081 – Sharpening something

This TSQL Tuesday is hosted by my good friend Jason Brimhall – Jason has put forth a creative challenge: plan to do something, carry out that plan, and then write about the experience, with a two-week period between planning and implementation. I must confess that I cheated a bit here, because what I am writing about is not really a two-week thing – but it is an opportunity for me to write about something I recently planned and carried out.
I have about a decade and a half of operational DBA experience as of this year. I was beginning to tire of the relentlessly repetitive nature of the work, and longed to do something different. I was reading a lot on BI/analytics – and although I have little background in BI, I was fascinated by what is now possible with the many tools available to analyze data and come up with interesting findings. I love painting dashboards, and I have a background in statistics as well. Talking to a few people in the community, it seemed to make sense to find my way into BI/analytics. But here was the catch: which industry to go into? Analytics is very industry specific – knowledge of the data and its nuances is as important as knowledge of the technical tools one dabbles with. Some thought and deliberation pointed to healthcare analytics as a good choice. So, I had a plan: to find a job that would introduce me to the wide world of BI/healthcare analytics, and to learn as much as I could on my own.
After a 3-month search that involved many uncertainties and turning down many lucrative operational DBA offers, I found a job where they actually needed a BI DBA – they were willing to let me learn BI if I brought my DBA skills to the table. They were also going into Tableau and analytics. And what is more, they do healthcare.
I took the job and then started working on upping my skills in the said areas. I joined a course in healthcare analytics. I use Pluralsight to learn SQL Server BI, and Udemy for R programming. I also blog once a week on something I learned with R on healthcare data. My blogging interest and following have picked up considerably.
That, in short, is my mid-life career transition – a rather big subject for a TSQL Tuesday, but one that most people run into. I wish you the best of luck – and the patience and perseverance to find what you truly love to do. Thanks, Jason, for the opportunity.

TSQL Tuesday #72 – Data Modeling gone wrong

My first TSQL Tuesday post on the new blog is an invite by my friend and SQL sister Mickey Stuewe (b|t). It is about data modeling practices that are best avoided, and how problems arising from bad data models can be fixed.

I have several experiences to recount in this regard, as my first ever job involved data modeling and design. I will pick one of them for this post. Several years ago, I was working for a startup company as a junior DBA. DBAs also had their say in how data models were designed, although we didn’t always get the final word on it. Since it was a new company working off its own software, we had some freedom to design things as we wanted to. The issue we had on hand had to do with some code tables. Or rather, a code table. We had tables for products, customers and invoices. We also needed a few lookup tables: a product type, a customer type and an invoice type. In addition to that, we needed a payment type, an address type and a whole host of other ‘types’. The structure of these was more or less similar – a code and a description, in most cases. In a truly third-normal-form design, each of these would have been its own table with a foreign key relationship into the table it was used in. Small tables, no doubt, but specifically intended to store the lookup data in question.

The senior DBA I reported to was of the opinion that a lot of small tables added up to a large schema that was difficult to manage, and that the queries would get too ‘deep’ over time. So he made a call that we would have a generic ‘code table’ – a simple denormalized table with three fields: a code, a code type, and a code description. All of our codes – product type, customer type, invoice type, address type, everything – would go in that one table, with the code type telling us what it was and the code description giving some additional information. (A sketch of this design follows the issue list below.)

Programmers were thrilled with the idea, since it meant they had only one table to work with for most of their queries. Initially, this code table contained only about 100 or so records and seemed like a workable idea. But the business started to grow, and the table grew along with it. We started running into the issues below:

1 Almost all the queries in the application referred to this one table – leading to it getting hit far too many times.
2 The queries got increasingly cryptic when they tried to refer to one specific range of records – for example, an address type was indicated by a code type of 'A', so one query would say code_type = 'A', another would say code_type = 'I', and so on. When the programmer left, nobody really knew what these letters meant. And yes, there was no third table to decode the meaning.
3 The business wanted to manage the data, and there was no easy way to give them an interface. The data was a jumble of many different things and could not be managed with one screen. The programming effort needed to give them multiple screens was huge.
4 Some of the ‘codes’ started needing additional attributes – phone number type, for example, was initially just cell phone or residential. Then some customers started asking for a second residential line, so you had to say residential 1 or 2, or cell phone 1 or 2.
5 There were no foreign key relationships defined – so enforcing the relationships was left entirely to application code. The code had bugs, and that resulted in bad data getting in very often.
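
To make the antipattern concrete, here is roughly what that generic code table and one of its cryptic queries looked like – a reconstruction from memory with hypothetical names, not the original schema:

-- The one-size-fits-all lookup table (hypothetical reconstruction)
CREATE TABLE code_table (
    code        VARCHAR(10)  NOT NULL,
    code_type   CHAR(1)      NOT NULL,  -- 'A' = address type, 'I' = invoice type, ...
    description VARCHAR(100) NOT NULL,
    CONSTRAINT pk_code_table PRIMARY KEY (code_type, code)
)

-- Every query filtered on a magic letter that nobody documented
SELECT code, description
FROM code_table
WHERE code_type = 'A'  -- address types, if you happened to know that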

In short, it got so complicated that the CIO had to approve a project to normalize the whole thing. The rewrite involved creating several smaller tables, putting in the right foreign keys and relationships, and creating interfaces for the business to manage the data. The big lesson that came out of it was that a smaller schema or fewer tables does not really mean a simpler, easier-to-manage design – in fact, it can be the exact opposite.
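
A minimal sketch of the direction the rewrite took, again with hypothetical names: each ‘type’ becomes its own small, self-describing table, and the referencing table declares a real foreign key, so the database rather than application code enforces the relationship:

-- One small lookup table per 'type'
CREATE TABLE address_type (
    address_type_id INT          NOT NULL PRIMARY KEY,
    description     VARCHAR(100) NOT NULL
)

-- The referencing table declares the relationship explicitly
CREATE TABLE customer_address (
    customer_id     INT          NOT NULL,
    address_type_id INT          NOT NULL
        CONSTRAINT fk_customer_address_type
        REFERENCES address_type (address_type_id),
    address_line    VARCHAR(200) NOT NULL
)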

Good database design is the backbone of a well-designed and well-performing application – always try to learn what goes into it. A great article by SQL Server MVP Louis Davidson describes several issues like this and is worth reading.

TSQL Tuesday #66 – Monitoring

This TSQL Tuesday post, hosted by Cathrine Wilhelmsen, has to do with monitoring.

During my many years as a DBA, I have had my fair share of experience with monitoring tools – some of them are listed below:

1 Spotlight for SQL Server – a great tool for server-level monitoring with very cool graphics. What I really liked about Spotlight was that it was very non-techie friendly – whenever the director or VP chose to walk by and wanted a peek into server health, all I needed to do was pull up the screen and show them the dials whizzing around. On the flip side, Spotlight did not help a whole lot with query tuning. It was also rather difficult to integrate with our ticketing system to generate tickets.

2 More recently, I have been working with System Center Operations Manager (SCOM) as our enterprise-wide monitoring system. SCOM is a really complicated monitoring system to work with – but once you get familiar with it, there is a lot it can do. In particular, I really liked its ability to open tickets on its own within our ticketing system (with some customization, of course), as well as close alerts on its own when the alerting situation was resolved. SCOM calls for a dedicated admin (which is part of my job), and a lot of tuning to get it down to where it only alerts on what you need without a whole lot of extra noise. SCOM is also NOT a tuning tool – it is only an alerting tool.

3 There are some situations that fall completely outside the realm of third-party monitoring tools – for example, we needed an alert whenever a user was added to a server role on some servers. We wrote a really simple script that pulls the logins that are in server roles, and set it up as a job to email us every day in case something changed.

-- Beginning of script – authored by Mala Mahadevan

-- Variables for the HTML email body (declarations were implied in the original)
DECLARE @tableHTML NVARCHAR(MAX), @Results NVARCHAR(MAX)

-- Gather server role memberships into a temp table
SELECT role.name AS Name,
       member.name AS MemberName,
       CASE WHEN role.is_disabled = 1 THEN 'DISABLED'
            WHEN role.is_disabled = 0 THEN 'ENABLED'
       END AS [Status]
INTO #temp_dbo
FROM sys.server_role_members AS srm
JOIN sys.server_principals AS role
    ON srm.role_principal_id = role.principal_id
JOIN sys.server_principals AS member
    ON srm.member_principal_id = member.principal_id
    -- filtering for logins that come from our domain
    AND member.name LIKE 'prod%'

-- Format email
SET @tableHTML =
    N'<H1>Server Role members on Server A</H1>' +
    N'<table border="1">' +
    N'<tr><th>Name</th><th>Role</th><th>Status</th></tr>' +
    CAST((SELECT td = CAST([Name] AS NVARCHAR(100)), '',
                 td = CAST([MemberName] AS NVARCHAR(100)), '',
                 td = CAST([Status] AS NVARCHAR(100)), ''
          FROM #temp_dbo
          ORDER BY Name
          FOR XML PATH('tr'), TYPE) AS NVARCHAR(MAX)) +
    N'</table>'
SET @Results = ISNULL(@tableHTML, '')

-- Email the team on findings
EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'mydbateam@mydba.com',
    @subject = 'Server Roles on Server A',
    @importance = 'Normal',
    @body = @Results,
    @body_format = 'HTML'

DROP TABLE #temp_dbo
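
The post above mentions running this daily from a job. The original job definition isn’t shown, but a minimal sketch of what it could look like with SQL Agent’s standard msdb procedures follows – the job name and the wrapper procedure are made up for illustration:

USE msdb

-- Create the job (hypothetical name)
EXEC dbo.sp_add_job @job_name = N'DBA - Daily server role report'

-- One T-SQL step that runs the script above, assumed here to be saved as a stored procedure
EXEC dbo.sp_add_jobstep
    @job_name = N'DBA - Daily server role report',
    @step_name = N'Email server role members',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.usp_email_server_role_members'  -- hypothetical wrapper procedure

-- Run every day at 7:00 AM
EXEC dbo.sp_add_jobschedule
    @job_name = N'DBA - Daily server role report',
    @name = N'Daily 7AM',
    @freq_type = 4,      -- daily
    @freq_interval = 1,  -- every day
    @active_start_time = 70000

-- Target the local server so the job actually runs
EXEC dbo.sp_add_jobserver @job_name = N'DBA - Daily server role report'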

TSQL Tuesday #61 – Giving back

This TSQL Tuesday is hosted by my friend Wayne Sheffield. The topic is ‘giving back’ – what do you think you can give back to the SQL community in the upcoming year?

When I was new to this country, I had a friend who was a psychotherapist by profession. Towards the holidays – when everyone took time off, went shopping, decorated homes and so on – she would be working double, sometimes triple, shifts. She worked for herself, so nobody made her do this. But it was the time when business boomed for her. More people were depressed and low towards the holidays, which meant more business for therapists. Now, I come from a country with a lot of poor people. Seeing more people depressed instead of happy during a festive season was news to me – but that is a topic for another post. The point, as she explained to me, was that the need to give without adequate self-nourishment made a lot of people feel very depleted and, as a result, depressed and low. In some ways, our small community is also reflective of the bigger outside culture – we expect people to give more and more: volunteers need to do more, the people on the PASS Board need to do this and that, MVPs and friends who are doing well need to recommend jobs…on and on. What are we doing to ensure appreciation and self-nourishment within our community?

I don’t mean propping people up or tweeting about how much we love each other and so on, although those things are important. I mean things like looking out for someone who is out of work. If you find any opportunities that he or she may be interested in, let them know. Put in a word of recommendation, if you can. Share your own stories – not just of successes, but of failures. Like the time when a technical interview was really hard and you knew in 10 minutes that you were a poor fit. Or the time when you wondered how having 500+ LinkedIn contacts was not leading to any successful job leads. Or when everyone was talking of the SQL Server job market booming and there was nothing remarkable happening in your neck of the woods. We all have times like that. Sometimes, sharing those stories is the best thing we can do to help someone who is depleted. As I write this, I know of 4 people who are looking for work. They are all smart, hardworking, caring individuals who have given to the community and are in need of self-nourishment. My goal is to be there for them in whatever way I can. And for people like them, in the year to come.

TSQL Tuesday #60 – Something new learned..

This TSQL Tuesday is hosted by my good friend Chris Yates – the topic is something new learned in the recent past. It is a simple topic but can sometimes be hard to articulate. I am a big believer in keeping an open mind and learning anything new that comes my way – and I have learned that the older you grow, the harder this can be. The one practical lesson I claim to have learned, finally, in the recent past is to sell myself and my work better. Or, to put it in other words, that publicity for what you learn and do is as important as the work itself.

I grew up in a culture that did not think too well of people – in particular women – being very extroverted. That, combined with the fact that I was home-schooled for health reasons, did not make me a huge extrovert by nature. My good friend Kevin Kline once did a survey of geeks and explained that most people who take to computers in a big way are, in some ways, introverted. I don’t think being introverted has anything to do with self-esteem, or even with being shy or antisocial. I just think these are preferences around how we are – that we prefer quality company over numbers, and are somewhat deliberate in our thoughts and actions. I do think, though, that it impacts how we sell ourselves, or publicize our work. For someone to know you, you need to get out there and show them who you are – or it is more likely that people who know far less than you do, but drum up their work better, will get the right chances to get ahead.

In my 13 years of attending the PASS Summit, I have always tried to spend my time in classes, attending the occasional evening party. This time, I did something different. I spent most of my time networking – hanging out in the community zone, asking people if they’d like to do lunches or dinners where we shared each other’s cares and concerns, and introducing myself to many SQL celebrities I had only known by name. I found the experience very rewarding – I got a lot of insight into options for career advancement, upcoming trends and changes in the nature of work (more telecommuting, more data analytics, more ‘unlimited’ vacation…) – not to mention the emotional support from a huge number of #sqlhugs. I was selective, of course – in some ways, like I have always been. I usually hang out with people who I feel are genuine at some level, not just all about fluff, and are intuitively easy to relate to. That is something I will want to keep.

My good friend Grant Fritchey once remarked to someone who was a very strong techie and yet very humble and unassuming – ‘you need to be more assuming’ (he meant that you need to sell yourself more, and better). That, in one sentence, is what I learned recently – to be more ‘assuming’ while being true and genuine at the same time.

TSQL Tuesday #59 – My hero(es) – The Quiet Achievers

I decided to resume my effort at consistent blogging, after more than a year, with this week’s TSQL Tuesday invite – this invite is by Tracy McKibben and is on ‘Heroes’. There are many heroes in the SQL world who have inspired me and continue to. But there are two among them whom I wish to single out for this post.

1 Joseph Sack – I met Joe during a SQLskills Immersion Event in Florida. I had known him as the former director of the MCM program before he joined SQLskills, and I had a mental image of him as at least twice as old as he really was. Joe was much younger, very smart and thorough in what he knew, kind, funny and very easy to be around. Soon after that event, Paul Randal put out the yearly mentoring program that they do, and the candidates each of them had picked to mentor. I had long been looking for a mentor, and was a bit saddened that I did not make it. I sent out a personal request to Joe, asking if he would mind mentoring me. I honestly did not expect to hear anything in the affirmative, but he responded immediately – yes, he would be happy to. I spent a year on the mentoring program with Joe – talking every month about things that mattered to me and asking for his advice, suggestions and the like. After a few months we wound up good friends who shared each other’s cares and worries, rather than just a mentor and mentee. What impressed me about him initially are the same qualities that impress me still – humble, down-to-earth, hard working and always willing to lend a helping hand. (Unfortunately Joe is no longer on social media or the blogosphere for me to add any links.) Proud to know you and be your friend, Joe.

2 Wayne Sheffield – I met Wayne as a speaker at one of my SQL Saturdays. That particular SQL Saturday was not really a great event for me and my team. Two of my most senior volunteers had to depart to attend to some personal duties – I was very short on help and struggling to pull things together, with many things falling apart. One of the things that ‘fell apart’ was the speaker shirts, which were misplaced. I was unable to find Wayne’s shirt – he had come with the expectation that he would get one, and needless to say, it was not the best experience for either of us (the shirt was found later). But the year after, I went on SQL Cruise Alaska, and joined Wayne and his family the day before for a tour of the Seattle Underground. During the cruise I got to know him much better, and we are now good friends. I have since followed his journey towards the MCM, and various job changes too. He has always been there to help me – with SQL advice, professional help, or just to lend a listening ear. Like Joe, the qualities I admire in Wayne are his patience, humility, hard work, down-to-earthiness and willingness to help.

‘A person is known by the friends they keep’ is an old saying – so if people know me by friends such as these, I am sure I am known well 🙂 As a last word, I would recommend that everyone find people they vibe well with, not just people who are celebrities or big names. Find people who understand who you are, and are willing to support, encourage and motivate you. And do the same for them. Attend programs like the SQLskills classes or the SQL Cruise – they are not just for learning; I have found some of my best friends this way.

TSQL Tuesday #31: Logging: Case of the Missing LSN

This is in response to Aaron Nelson’s call for TSQL Tuesday, on the subject of Logging. There are many things that come to a DBA’s mind when talking of logging. For this I chose an incident that happened some years ago. I had joined a new job, and the DBA whose spot I had taken had been there for several years before she left. She had set up most of the maintenance jobs on the existing servers, most of which were on SQL Server 2000 at the time.

On the second day of my job I was asked to restore a backup, as of a certain date/time, to a development server to recover some data for an application manager. I was directed to the place where backups were stored. I found the full backup, and hourly transaction logs following that. As I set about restoring, one particular log gave me a missing LSN error. The log backup job was set to run hourly, the timestamps showed all the logs were there, and there were no errors on the job. I looked further to see if anything else was going on at the time that log was backed up – and found a reindexing job in which the recovery model was being set to simple and back to full again. In SQL Server 2005 and above, if this change is made, the log chain is broken and the next log backup throws an error saying a full backup is needed before log backups can resume. But in 2000 there was no such thing. The log chain was still broken, but the job was not throwing any errors to show it. The backups were just not usable.
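For illustration, this is roughly the sequence on SQL Server 2005 and later, where the break does surface as an error – a sketch against a hypothetical database name:

-- Reindexing job flips the recovery model to cut down on log growth
ALTER DATABASE SalesDB SET RECOVERY SIMPLE   -- the log chain is broken here
-- ... reindexing work ...
ALTER DATABASE SalesDB SET RECOVERY FULL

-- On 2005+, the next log backup fails loudly
-- (Msg 4214: BACKUP LOG cannot be performed because there is no current database backup)
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB.trn'

-- A full (or differential) backup restarts the chain
BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB.bak'
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB.trn'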
The process had been in place for more than 4 years, and luckily there had been no need to use the transaction log backups during that interval. They even had a mechanism for testing backups, but it did not test backups taken during this particular sequence of operations. After that episode I changed the testing process to work through the full job cycle – through all jobs and maintenance windows – in case there was any other issue lurking. It was a great lesson for me on backup testing and on how many issues can remain dormant for years until one hits them accidentally. Luckily it was not a crisis situation when it was discovered, but it might very well have been. Test your backups through your jobs and processes as necessary, not just during small windows.

T-SQL Tuesday #025 – Invitation to Share Your Tricks

This is my first response to a TSQL Tuesday.

There are a lot of things in TSQL that can be done to make a DBA’s life easier. We have a large collection of scripts. These are some I have used over the years at multiple jobs – there are several more, but I don’t particularly want to write a very long post. Also, a disclaimer: I did not write all of them. I wrote some, and I tweaked some I found on sites like SQLServerCentral.com; they have come in very handy. I apologize ahead of time for not offering credit, since I have not kept track of the authors.

--Find the last server start date (SQL Server 2005)
SELECT crdate AS Datetime_Started FROM sysdatabases WHERE name = 'tempdb'
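
On SQL Server 2008 and later there is a more direct route – not part of the original collection, but worth knowing:

-- sys.dm_os_sys_info exposes the instance start time directly (2008+)
SELECT sqlserver_start_time FROM sys.dm_os_sys_info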

1 --Find blank passwords
SELECT name FROM sys.syslogins WHERE PWDCOMPARE('', password) = 1

2 --Find permissions granted to a principal (for example, public)
SELECT a.[name] + ' ' + v.[name] + ' ON ' + QUOTENAME(oo.[name]) + '.' + QUOTENAME(o.[name]) + ' TO ' + QUOTENAME(u.[name])
FROM dbo.sysprotects AS p
JOIN master.dbo.spt_values AS a ON (a.number = p.protecttype AND 'T' = a.type)
JOIN master.dbo.spt_values AS v ON (v.number = p.action AND 'T' = v.type)
JOIN dbo.sysobjects AS o ON (o.id = p.id)
JOIN dbo.sysusers AS oo ON (oo.uid = o.uid)
JOIN dbo.sysusers AS u ON (u.uid = p.uid)
WHERE 'public' = u.name

3 --Find the number of times a stored procedure was executed since the last reboot
SELECT DB_NAME(st.dbid) AS DBName,
    OBJECT_SCHEMA_NAME(st.objectid, st.dbid) AS SchemaName,
    OBJECT_NAME(st.objectid, st.dbid) AS StoredProcedure,
    MAX(cp.usecounts) AS Execution_count
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE DB_NAME(st.dbid) IS NOT NULL
    AND cp.objtype = 'Proc'
GROUP BY cp.plan_handle, DB_NAME(st.dbid),
    OBJECT_SCHEMA_NAME(st.objectid, st.dbid),
    OBJECT_NAME(st.objectid, st.dbid)
ORDER BY MAX(cp.usecounts)

4 --Stick in anything to be changed across all dbs (sp_dboption with READ ONLY used here)
SET NOCOUNT ON

-- Get the names of all user databases
DECLARE UserDatabases CURSOR FOR
SELECT name FROM master..sysdatabases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb', 'distribution')

-- Open cursor
OPEN UserDatabases

-- Define variables needed
DECLARE @DBase NVARCHAR(128)
DECLARE @COMMAND NVARCHAR(128)

-- Get first database
FETCH NEXT FROM UserDatabases INTO @DBase

-- Process until no more databases
WHILE (@@FETCH_STATUS = 0)
BEGIN
    SET @COMMAND = 'master..sp_dboption @dbname=''' + @DBase +
        ''', @optname=''READ ONLY'', @optvalue=''True'''

    -- Print command to be processed
    PRINT @COMMAND

    -- Process command
    EXEC (@COMMAND)

    -- Get next database
    FETCH NEXT FROM UserDatabases INTO @DBase
END

-- Close and deallocate cursor
CLOSE UserDatabases
DEALLOCATE UserDatabases

5 --My favorite – lists the job calendar

USE msdb

DECLARE @JOB_ID VARCHAR(200),
    @SCHED_ID VARCHAR(200),
    @FREQ_TYPE INT,
    @FREQ_INTERVAL INT,
    @FREQ_SUBDAY_TYPE INT,
    @FREQ_SUBDAY_INTERVAL INT,
    @FREQ_RELATIVE_INTERVAL INT,
    @FREQ_RECURRENCE_FACTOR INT,
    @ACTIVE_START_DATE INT,
    @SCHEDULE VARCHAR(1000),
    @SCHEDULE_DAY VARCHAR(200),
    @START_TIME VARCHAR(10),
    @END_TIME VARCHAR(10)

CREATE TABLE #SCHEDULES (
    JOB_ID VARCHAR(200),
    SCHED_ID VARCHAR(200),
    JOB_NAME SYSNAME,
    [STATUS] INT,
    SCHEDULED INT NULL,
    SCHEDULE VARCHAR(1000) NULL,
    FREQ_TYPE INT NULL,
    FREQ_INTERVAL INT NULL,
    FREQ_SUBDAY_TYPE INT NULL,
    FREQ_SUBDAY_INTERVAL INT NULL,
    FREQ_RELATIVE_INTERVAL INT NULL,
    FREQ_RECURRENCE_FACTOR INT NULL,
    ACTIVE_START_DATE INT NULL,
    ACTIVE_END_DATE INT NULL,
    ACTIVE_START_TIME INT NULL,
    ACTIVE_END_TIME INT NULL,
    DATE_CREATED DATETIME NULL)

-- Pull every job/schedule pair into the work table
INSERT INTO #SCHEDULES (job_id, sched_id, job_name, [status], scheduled, schedule,
    freq_type, freq_interval, freq_subday_type, freq_subday_interval,
    freq_relative_interval, freq_recurrence_factor,
    active_start_date, active_end_date, active_start_time, active_end_time, date_created)
SELECT j.job_id, sched.schedule_id, j.name, j.enabled, sched.enabled, NULL,
    sched.freq_type, sched.freq_interval, sched.freq_subday_type, sched.freq_subday_interval,
    sched.freq_relative_interval, sched.freq_recurrence_factor,
    sched.active_start_date, sched.active_end_date, sched.active_start_time, sched.active_end_time,
    j.date_created
FROM sysjobs j
INNER JOIN sysjobschedules s ON j.job_id = s.job_id
INNER JOIN dbo.sysschedules sched ON s.schedule_id = sched.schedule_id

-- Decode one schedule per pass until every row has a description
WHILE 1 = 1
BEGIN
    SET @SCHEDULE = ''

    IF (SELECT COUNT(*) FROM #SCHEDULES WHERE scheduled = 1 AND schedule IS NULL) = 0
        BREAK
    ELSE
    BEGIN
        -- Pick an undecoded row; convert the integer HHMMSS start/end times to readable clock times
        SELECT @JOB_ID = job_id,
            @SCHED_ID = sched_id,
            @FREQ_TYPE = freq_type,
            @FREQ_INTERVAL = freq_interval,
            @FREQ_SUBDAY_TYPE = freq_subday_type,
            @FREQ_SUBDAY_INTERVAL = freq_subday_interval,
            @FREQ_RELATIVE_INTERVAL = freq_relative_interval,
            @FREQ_RECURRENCE_FACTOR = freq_recurrence_factor,
            @ACTIVE_START_DATE = active_start_date,
            @START_TIME = CASE
                WHEN LEFT(active_start_time, 2) IN (22, 23) AND LEN(active_start_time) = 6
                    THEN CONVERT(VARCHAR(2), LEFT(active_start_time, 2) - 12) + ':' + SUBSTRING(CAST(active_start_time AS CHAR), 3, 2) + ' P.M.'
                WHEN LEFT(active_start_time, 2) = 12 AND LEN(active_start_time) = 6
                    THEN CAST(LEFT(active_start_time, 2) AS CHAR(2)) + ':' + SUBSTRING(CAST(active_start_time AS CHAR), 3, 2) + ' P.M.'
                WHEN LEFT(active_start_time, 2) BETWEEN 13 AND 24 AND LEN(active_start_time) = 6
                    THEN CONVERT(VARCHAR(2), LEFT(active_start_time, 2) - 12) + ':' + SUBSTRING(CAST(active_start_time AS CHAR), 3, 2) + ' P.M.'
                WHEN LEFT(active_start_time, 2) IN (10, 11) AND LEN(active_start_time) = 6
                    THEN CAST(LEFT(active_start_time, 2) AS CHAR(2)) + ':' + SUBSTRING(CAST(active_start_time AS CHAR), 3, 2) + ' A.M.'
                WHEN active_start_time = 0
                    THEN '12:00 A.M.'
                WHEN LEN(active_start_time) = 4
                    THEN '12:' + CONVERT(VARCHAR(2), LEFT(active_start_time, 2)) + ' A.M.'
                WHEN LEN(active_start_time) = 3
                    THEN '12:0' + CONVERT(VARCHAR(2), LEFT(active_start_time, 1)) + ' A.M.'
                WHEN LEN(active_start_time) = 2
                    THEN '12:00:' + CONVERT(VARCHAR(2), LEFT(active_start_time, 2)) + ' A.M.'
                WHEN LEN(active_start_time) = 1
                    THEN '12:00:0' + CONVERT(VARCHAR(2), LEFT(active_start_time, 2)) + ' A.M.'
                ELSE CAST(LEFT(active_start_time, 1) AS CHAR(1)) + ':' + SUBSTRING(CAST(active_start_time AS CHAR), 2, 2) + ' A.M.'
            END,
            @END_TIME = CASE
                WHEN LEFT(active_end_time, 2) IN (22, 23) AND LEN(active_end_time) = 6
                    THEN CONVERT(VARCHAR(2), LEFT(active_end_time, 2) - 12) + ':' + SUBSTRING(CAST(active_end_time AS CHAR), 3, 2) + ' P.M.'
                WHEN LEFT(active_end_time, 2) = 12 AND LEN(active_end_time) = 6
                    THEN CAST(LEFT(active_end_time, 2) AS CHAR(2)) + ':' + SUBSTRING(CAST(active_end_time AS CHAR), 3, 2) + ' P.M.'
                WHEN LEFT(active_end_time, 2) BETWEEN 13 AND 24 AND LEN(active_end_time) = 6
                    THEN CONVERT(VARCHAR(2), LEFT(active_end_time, 2) - 12) + ':' + SUBSTRING(CAST(active_end_time AS CHAR), 3, 2) + ' P.M.'
                WHEN LEFT(active_end_time, 2) IN (10, 11) AND LEN(active_end_time) = 6
                    THEN CAST(LEFT(active_end_time, 2) AS CHAR(2)) + ':' + SUBSTRING(CAST(active_end_time AS CHAR), 3, 2) + ' A.M.'
                WHEN active_end_time = 0
                    THEN '12:00 A.M.'
                WHEN LEN(active_end_time) = 4
                    THEN '12:' + CONVERT(VARCHAR(2), LEFT(active_end_time, 2)) + ' A.M.'
                WHEN LEN(active_end_time) = 3
                    THEN '12:0' + CONVERT(VARCHAR(2), LEFT(active_end_time, 1)) + ' A.M.'
                WHEN LEN(active_end_time) = 2
                    THEN '12:00:' + CONVERT(VARCHAR(2), LEFT(active_end_time, 2)) + ' A.M.'
                WHEN LEN(active_end_time) = 1
                    THEN '12:00:0' + CONVERT(VARCHAR(2), LEFT(active_end_time, 2)) + ' A.M.'
                ELSE CAST(LEFT(active_end_time, 1) AS CHAR(1)) + ':' + SUBSTRING(CAST(active_end_time AS CHAR), 2, 2) + ' A.M.'
            END
        FROM #SCHEDULES
        WHERE schedule IS NULL AND scheduled = 1

        -- One-time and agent-start schedules
        IF @FREQ_TYPE IN (1, 64)
        BEGIN
            SELECT @SCHEDULE = CASE @FREQ_TYPE
                WHEN 1 THEN 'Occurs Once, On ' + CAST(@ACTIVE_START_DATE AS VARCHAR(8)) + ', At ' + @START_TIME
                WHEN 64 THEN 'Occurs When SQL Server Agent Starts'
            END
        END
        ELSE
        BEGIN
            -- Daily
            IF @FREQ_TYPE = 4
                SELECT @SCHEDULE = 'Occurs Every ' + CAST(@FREQ_INTERVAL AS VARCHAR(10)) + ' Day(s)'

            -- Weekly: freq_interval is a bitmask of weekdays
            IF @FREQ_TYPE = 8
            BEGIN
                SELECT @SCHEDULE = 'Occurs Every ' + CAST(@FREQ_RECURRENCE_FACTOR AS VARCHAR(3)) + ' Week(s)'
                SELECT @SCHEDULE_DAY = ''
                IF (@FREQ_INTERVAL / 1) % 2 = 1 SELECT @SCHEDULE_DAY = @SCHEDULE_DAY + 'Sun'
                IF (@FREQ_INTERVAL / 2) % 2 = 1 SELECT @SCHEDULE_DAY = @SCHEDULE_DAY + 'Mon'
                IF (@FREQ_INTERVAL / 4) % 2 = 1 SELECT @SCHEDULE_DAY = @SCHEDULE_DAY + 'Tue'
                IF (@FREQ_INTERVAL / 8) % 2 = 1 SELECT @SCHEDULE_DAY = @SCHEDULE_DAY + 'Wed'
                IF (@FREQ_INTERVAL / 16) % 2 = 1 SELECT @SCHEDULE_DAY = @SCHEDULE_DAY + 'Thu'
                IF (@FREQ_INTERVAL / 32) % 2 = 1 SELECT @SCHEDULE_DAY = @SCHEDULE_DAY + 'Fri'
                IF (@FREQ_INTERVAL / 64) % 2 = 1 SELECT @SCHEDULE_DAY = @SCHEDULE_DAY + 'Sat'
                SELECT @SCHEDULE = @SCHEDULE + ', On ' + @SCHEDULE_DAY
            END

            -- Monthly, on a fixed day of the month
            IF @FREQ_TYPE = 16
                SELECT @SCHEDULE = 'Occurs Every ' + CAST(@FREQ_RECURRENCE_FACTOR AS VARCHAR(3)) + ' Month(s) on Day ' + CAST(@FREQ_INTERVAL AS VARCHAR(3)) + ' of that Month'

            -- Monthly, relative (for example, the second Tuesday)
            IF @FREQ_TYPE = 32
            BEGIN
                SELECT @SCHEDULE = CASE @FREQ_RELATIVE_INTERVAL
                    WHEN 1 THEN 'First'
                    WHEN 2 THEN 'Second'
                    WHEN 4 THEN 'Third'
                    WHEN 8 THEN 'Fourth'
                    WHEN 16 THEN 'Last'
                    ELSE 'Not Applicable'
                END
                SELECT @SCHEDULE = CASE @FREQ_INTERVAL
                    WHEN 1 THEN 'Occurs Every ' + @SCHEDULE + ' Sunday of the Month'
                    WHEN 2 THEN 'Occurs Every ' + @SCHEDULE + ' Monday of the Month'
                    WHEN 3 THEN 'Occurs Every ' + @SCHEDULE + ' Tuesday of the Month'
                    WHEN 4 THEN 'Occurs Every ' + @SCHEDULE + ' Wednesday of the Month'
                    WHEN 5 THEN 'Occurs Every ' + @SCHEDULE + ' Thursday of the Month'
                    WHEN 6 THEN 'Occurs Every ' + @SCHEDULE + ' Friday of the Month'
                    WHEN 7 THEN 'Occurs Every ' + @SCHEDULE + ' Saturday of the Month'
                    WHEN 8 THEN 'Occurs Every ' + @SCHEDULE + ' Day of the Month'
                    WHEN 9 THEN 'Occurs Every ' + @SCHEDULE + ' Weekday of the Month'
                    WHEN 10 THEN 'Occurs Every ' + @SCHEDULE + ' Weekend Day of the Month'
                END
            END

            -- Intra-day frequency
            SELECT @SCHEDULE = CASE @FREQ_SUBDAY_TYPE
                WHEN 1 THEN @SCHEDULE + ', At ' + @START_TIME
                WHEN 2 THEN @SCHEDULE + ', every ' + CAST(@FREQ_SUBDAY_INTERVAL AS VARCHAR(3)) + ' Second(s) Between ' + @START_TIME + ' and ' + @END_TIME
                WHEN 4 THEN @SCHEDULE + ', every ' + CAST(@FREQ_SUBDAY_INTERVAL AS VARCHAR(3)) + ' Minute(s) Between ' + @START_TIME + ' and ' + @END_TIME
                WHEN 8 THEN @SCHEDULE + ', every ' + CAST(@FREQ_SUBDAY_INTERVAL AS VARCHAR(3)) + ' Hour(s) Between ' + @START_TIME + ' and ' + @END_TIME
            END
        END
    END

    UPDATE #SCHEDULES
    SET schedule = @SCHEDULE
    WHERE job_id = @JOB_ID
        AND sched_id = @SCHED_ID
END

SELECT job_name,
    [status] = CASE [status] WHEN 1 THEN 'ENABLED' WHEN 0 THEN 'DISABLED' ELSE ' ' END,
    scheduled = CASE scheduled WHEN 1 THEN 'Yes' WHEN 0 THEN 'No' ELSE ' ' END,
    schedule AS 'Frequency',
    CONVERT(DATETIME, CONVERT(VARCHAR, active_start_date, 101)) AS schedule_start_date,
    CONVERT(DATETIME, CONVERT(VARCHAR, active_end_date, 101)) AS schedule_end_date,
    date_created
FROM #SCHEDULES
WHERE scheduled = 1
ORDER BY job_name

DROP TABLE #SCHEDULES

I hope you all find something useful. And thank you to Allen White for hosting.