- Details
- Written by: Glenda Gable
- Category: Configuration/ Automation
This month's T-SQL Tuesday is being hosted by Garry Bargsley (blog | twitter) - Automate All The Things. Since everyone's environment and experiences are different, he asks, "what does “Automate All the Things” mean to you?".
I love automating things. What you lose with automation, though, is a human looking at the output and judging whether it falls within guidelines - and at most companies, those guidelines aren't spelled out well. That is why automation needs to be a thought-out plan, not just something that gets done quickly and never thought of again.
For me, I treat each automation as its own project or process. I learned the hard way that I want a lot of error trapping and alerting throughout. For that reason, when I automate a specific process, I do so in multiple phases: 1) run the base code that is needed, 2) define the error trapping much more thoroughly, 3) send notifications to end-users at appropriate times, and 4) add self-recovery so the process remediates errors rather than just reporting them.
- Details
- Written by: Glenda Gable
- Category: Security
All of the companies I have worked for have individual AD accounts with permissions in SQL. They have all wanted to simplify things by using AD security groups, but there are various reasons why that isn't always the reality, such as: software limitations, linked server configuration, legacy needs, changes impacting critical processes, etc.
I was recently introduced to the xp_logininfo stored procedure. It is a very handy tool for gathering information about an AD account or an AD security group. Whenever someone asks for access, it is my go-to for finding all of the groups they belong to which already have some permissions. I still check AD to see if there are more appropriate groups (based on who else is in a group), but I thought I would share how this stored proc is helpful.
- xp_logininfo 'ADaccountName', 'all'
By including the AD account name, you can get a list of all of the AD security groups that user belongs to which have permissions on the particular instance being queried. That last part is important. It isn't a list of all of the network groups the user belongs to - it is only those that already exist on that instance of SQL. For example, if my account belongs to GroupA, GroupB and GroupC, and the SQL instance I am querying already has GroupB with permissions, then GroupB will be the only result returned even though my account belongs to all 3 groups in AD.
- xp_logininfo 'ADsecurityGroupName', 'members'
By including the AD security group name, you can get a list of all of the AD members who belong in that group. This is an easy way of querying to get a list of everyone who has access.
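A quick sketch of both calls (the account and group names below are placeholders - substitute your own domain, user and group):

```sql
-- List the AD groups DOMAIN\jdoe belongs to that already have a login on THIS instance
EXEC xp_logininfo 'DOMAIN\jdoe', 'all';

-- List the AD members of a group that has a login on this instance
EXEC xp_logininfo 'DOMAIN\SQL_Readers', 'members';
```

The second form is handy as a quick audit: run it for each group with permissions and you have a current list of everyone with access.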
- Details
- Written by: Glenda Gable
- Category: Configuration/ Automation
This month's T-SQL Tuesday is being hosted by Bert Wagner (blog | twitter) - Code You Would Hate To Live Without. He wants you to share about the code that you would hate to live without, whether yours or someone else's.
I know there will be a lot of wonderful posts about this topic! I have 2 things that stand out as scripts I love.
- I changed a stored procedure that used a cursor and ran for 45 minutes into a set-based query that runs in less than 5 minutes. While I know I am stretching the topic just a little, it is something I think about often, very proudly. This was something that helped cement the idea that I was on the right path in my career. I knew I loved working with data and databases, but I remember the level of excitement I had when I got it to work. :)
- The other script is something I put together which works with log shipping for multiple databases. I made a robust, automated process from start to finish. There are still improvements I would make to it, but I don't work for that company anymore, so I probably won't have a chance in production for a while. The vendor had multiple databases (approx. 45) and that number would continue to grow as we added more businesses. Therefore, the solution had to be dynamic, nimble and easy to maintain. I am proud of the documentation (ohhh... I said a bad word), workflows and full process with error handling I set up. It was also easy to troubleshoot, and to run only one or a few databases at a time if only some were lagging behind.
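The cursor-to-set-based rewrite in the first item usually follows a pattern like the sketch below. The table and column names (dbo.Orders, Status) are made up for illustration - the actual stored procedure is not shown here:

```sql
-- Row-by-row (cursor) version: one UPDATE per row, slow at scale
DECLARE @OrderID int;
DECLARE order_cur CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'New';
OPEN order_cur;
FETCH NEXT FROM order_cur INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cur INTO @OrderID;
END;
CLOSE order_cur;
DEALLOCATE order_cur;

-- Set-based version: one statement does the same work in a single pass
UPDATE dbo.Orders
SET    Status = 'Processed'
WHERE  Status = 'New';
```

The set-based form lets the optimizer process the whole set at once instead of paying per-row overhead, which is where the 45-minutes-to-5-minutes kind of gain typically comes from.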
While neither script is a general purpose script (they are specific to the company I worked for at the time), they both make me feel confident in my abilities as a DBA. I know that I can keep at something to make it eventually bend to my will (muwhahha), and enjoy myself along the way.
- Details
- Written by: Glenda Gable
- Category: Performance
This article talks about how to stop or terminate a process that is running. Stopping a process shouldn't be the first step when troubleshooting as it can possibly make the situation worse. However, it can sometimes be helpful in clearing blocking chains or other performance issues.
In order to know what to stop, you will need to know what processes or queries are running. There are a variety of ways to see all of the processes or queries running right now, including the following:
- Microsoft built-in tools - sp_who and sp_who2
- Click here for Kendra's explanation on sp_who2
- Adam Machanic's tool - Who is Active (Click here to download it)
- Click here for Adam's documentation
- Click here for Brent Ozar's 5 min video tutorial
- Hand-made query using the sys.dm_exec_requests and/or the sys.dm_exec_sessions DMVs
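A minimal hand-made version of that last option could look like this (column selection is just a starting point - both DMVs expose many more):

```sql
-- What is running right now, with who is blocking whom
SELECT r.session_id,
       s.login_name,
       s.host_name,
       r.status,
       r.command,
       r.blocking_session_id,     -- non-zero means this session is being blocked
       r.wait_type,
       r.total_elapsed_time       -- milliseconds
FROM   sys.dm_exec_requests AS r
JOIN   sys.dm_exec_sessions AS s
       ON s.session_id = r.session_id
WHERE  s.is_user_process = 1;     -- skip system sessions
```

The blocking_session_id column is the quickest way to walk a blocking chain back to its head blocker.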
KILL
Once you know the session ID (spid) of the query or process you want to stop, you can do so by using the KILL command. The KILL command is pretty straightforward. The syntax is simply: KILL spid (replace "spid" with the number of the session you want to stop).
A word of caution here: there are times when it is best not to kill a session. For example, if a process has been running for a while using multiple threads, the rollback after it is killed will only be single threaded. Therefore, the rollback could take much longer than letting the process finish on its own. All of the same performance issues and/or blocking will continue in the meantime, so killing the session could actually make things worse, not better.
Some processes are not able to be stopped: per Microsoft, "System processes and processes running an extended stored procedure cannot be terminated."
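For example, if the head blocker turned out to be session 57 (a placeholder number - use the spid you actually identified):

```sql
-- Terminates session 57; any open transaction it holds is rolled back
KILL 57;
```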
UNIT OF WORK ID
The KILL command can also be used to stop or terminate orphaned distributed transactions. The GUID value that is the Unit of Work ID (UOW ID) can be found in the error log, the MS DTC monitor, or the sys.dm_tran_locks DMV.
The syntax is simply: KILL uowid
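For example, with a made-up placeholder GUID (use the actual UOW ID from the error log or sys.dm_tran_locks):

```sql
-- Terminates the orphaned distributed transaction with this Unit of Work ID
KILL 'B3F12AE1-5E4C-4AC5-9D2A-0C1E6F7A8B90';
```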
WITH STATUSONLY
While I knew about the KILL command, I wasn't aware until fairly recently that you can check the status of the rollback process with the WITH STATUSONLY option.
The syntax is simply: KILL spid WITH STATUSONLY - or - KILL uowid WITH STATUSONLY
For processes that are in the middle of being rolled back, this will show the status of the rollback. This option doesn't "do" anything; it just shows the progress. Keep in mind that, as with other Microsoft-provided durations, it isn't an exact number. It is more of a ballpark figure that might be close, but may also be a bit off from the actual time.
If you run this option on a spid that isn't part of a rollback, it won't hurt anything. You will just see an error returned indicating a rollback isn't in progress.
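Putting it together (57 is a placeholder spid), the check looks like this; the output, when a rollback is in progress, is a progress message along these lines:

```sql
-- Reports rollback progress only; does not affect the session
KILL 57 WITH STATUSONLY;
-- Output looks something like:
-- SPID 57: transaction rollback in progress. Estimated rollback completion: 80%.
-- Estimated time remaining: 10 seconds.
```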
It's funny, but whenever I have to kill a process (which is thankfully rare), I can't help but do a bad-guy cackle in my head... lol!
- Details
- Written by: Glenda Gable
- Category: DBA Info
This month's T-SQL Tuesday is being hosted by Adam Machanic (blog | twitter) - Looking Forward 100 Months. He aims to stretch our thinking and calls us to think about what is to come.
I decided to have a little fun with this one (after all, Adam bolded that part of the sentence, didn't he?). When I read the invitation, and Adam mentioned some people might want to go slightly more science fiction than science, my brain exploded with excitement! Then, I re-read it and picked up that it was supposed to be a post that might be given in 2026 (8 years or so from now) - and I kind of popped my own bubble. I couldn't talk about invisible implants created to give us an edge, or everyone having a pocket assistant robot, etc. One idea, though, still seemed possible to me.
Part of our job in IT involves working with other people. Some of us work with other DBAs, others work with infrastructure folks, end users, management, etc. Some of the people we talk to are technical, and some aren't; some know what they are talking about, and others just think they do. One of the biggest things I find absolutely wonderful about our SQL family is that the intention of most of us is honest and genuine, and that we are interested in helping others and doing our job well.
Many times, I will fall into a trap (that I lay myself, I am sure) where my genuine intent and/or sincerity is not taken into account. Due to office politics, I may be viewed in a negative way - as coming across defensive, intimidating, or as a threat. My intentions in most cases are pure, and I just want to help solve the overall problem in the best way I know how. Then, after some meeting ends and the emails start flying around, I end up in a situation where someone is vying for more power, has a different idea, or simply disagrees, and the next thing you know I am seen as ___(fill in the blank)___ (the bad guy/ not a team player/ a bottleneck).
With the improvements of Siri/Galaxy/Alexa, and devices that can do fairly specific things such as telling why a baby is crying, my thought is that, at some point, I would love to see a device invented which can help read the intentions of others. Many people can read human "signs", such as being able to spot a lie. There are even professions which rely heavily on reading people, such as criminal profilers, or those who play poker much better than me. We are definitely putting enough science into the psychology of human behaviour to make it possible to understand the motivations behind actions.
The optimist in me sees this as a wonderful tool to help bring people together and solve the world's problems. I can see where people who are shy or who are interested in working with another group but are hesitant, would be able to trust each other more with such a device! Imagine some of the solutions that could be devised when people from different cultures or walks of life, could actually work together instead of constantly distrust each other.
My thoughts drifted back down from cloud 9 to the relevance in my job, and I could see teams of people working together to come up with incredible solutions to business processes. A device such as this would help cut through (SOME) office politics and help identify people who are sincere in their passion and dedication to solving issues. There would still be discussions which, from the outside, look like shouting matches, because multiple passionate people in the same conversation invokes "interesting" debates. The difference would be that, when they walk out of the room, they understand their ideas were being bounced around by others who are just as invested and genuine as they are.
Then, I started thinking about all the data that this device could produce for analysis. Imagine the dashboards and reports that could be built to show how someone grows as a person through the interactions they have. By interacting with people of a more positive nature, we gravitate towards more creativity and productivity. Many psychologists "know" these things, but we would have more data to help prove it. Of course, I would be remiss if I didn't point out that all of this data could be used to make robots more human-like as well. With a deep understanding, and lots of data to back it up, we could end up with robots that are hard to distinguish from humans. Although, to be honest, if you watch any sci-fi movies, that idea is already on the table. Therefore, by the time this device rolls out, it is already in place anyway. :)
Yes, the realist in me understands that people can learn to "fool" or trick any device. I realize that a single device isn't going to turn some situations into rainbows and lollipops. I also realize that some people would use this as a way of targeting easier prey. In this blog post, though, I want to focus on the positive potential (and forget the bad stuff, since I am not in the high-tech engineering field and don't have to worry about creating it myself).
All in all, I know that there are ways of reading people now, and there are skills I could learn to help in many situations. I also know how much learning would go into that, and I want something to help in a much more immediate future. :) I realize I kind of went off the deep end and took the idea to "have a bit more fun" in a way that probably wasn't exactly what Adam intended, but that's the beauty of this - it's my blog! lol :)
- Details
- Written by: Glenda Gable
- Category: Configuration/ Automation
This month's T-SQL Tuesday is being hosted by Arun Sirpal (blog) - Your Technical Challenges Conquered. I love how Arun wants to jump in at the start of the new year and talk about something we have conquered... it really sets the tone for the year!!
While this isn't the most challenging thing I have ever done, it was (and still is) something I am proud of. For my post, I am writing about a scenario that stretched my boundaries on planning things out well in order to come up with a robust solution.
My company provides visits to patients in their home, hospice care, or long term care in an acute care setting. We have multiple facilities throughout the USA. The vendor of one of our EMR systems creates a separate database for each facility, so the vendor currently has 40+ databases for our company. For reporting and analysis needs, we needed to bring a copy of the data into our data center. Of course, when the business asks for something, they want it right away, and the easiest and quickest way was to set up log shipping. Discovery led us to realize that the end result would be used by the ETL team for importing into the data warehouse, the reporting team for ad-hoc reporting, and the application development team for various custom-built in-house applications. Therefore, on top of the log shipping, I also needed to make it easy for them to pull the data in a consistent way, without them having to keep up with new agencies, change connection strings, etc.
When thinking about the project as a whole, I knew I had the following requirements:
- agencies would be added as we transition them from another software to this EMR
- agencies would be added as we acquire new companies
- vendor folder structure which held the backup files was a root folder and every database had its own sub-folder
- handle issues that arise for one or multiple databases without interfering with the other database restore processes
- report on the "up-to-date" date for each database when users ask (there will always be someone who doubts the data is up to date because they don't see something they expected)
- combine/aggregate all data into a single database so that data can be queried in a consistent manner
- encryption process needs to be included in order to have the files encrypted in transit (even though using SFTP) and at rest
After planning, then polishing, then throwing that away, then planning some more, then polishing some more, etc., my end result is something that flows fairly well. I have a table which lists all of the databases, the folder where the log backups are, a flag to indicate "PerformLogBackups" and a flag to indicate "AggregateData". I have a decent-sized script which queries the list of databases that need to be updated, copies the encrypted file off the SFTP server onto a file share (just in case...), and then moves the file to the SQL Server's local drive. Then, it decrypts all of the files, loops through each database, and restores each file in order. Once all of the restores are completed, another job runs which aggregates the data into a single database for reporting needs. Lastly, it cleans up the files and sends appropriate notifications.
The table which holds the list of databases solves the requirement of new agencies added easily. It also solves the scenario if issues arise with some, not all databases, by toggling the flags appropriately. If a certain database is having an issue, toggling the flag will stop it from being restored, but others will continue as normal. Easy peezy... lemon squeezy...
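A sketch of what that control table and the "which databases need restoring" query could look like - the table and column names here are illustrative, not the actual production schema:

```sql
CREATE TABLE dbo.LogShippingControl
(
    DatabaseName      sysname       NOT NULL PRIMARY KEY,
    LogBackupFolder   nvarchar(260) NOT NULL,                -- vendor sub-folder for this database
    PerformLogBackups bit           NOT NULL DEFAULT 1,      -- toggle off to pause one database
    AggregateData     bit           NOT NULL DEFAULT 1       -- include in the combined reporting DB
);

-- The restore job only picks up databases whose flag is on,
-- so one problem database can be paused without touching the rest
SELECT DatabaseName, LogBackupFolder
FROM   dbo.LogShippingControl
WHERE  PerformLogBackups = 1;
```

Adding a new agency then becomes a single INSERT into the control table rather than a code change.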
I have checkpoints in the process, which will be covered as a topic in an upcoming post, and presentation. Also, I still need to do more, just to make it even more well-rounded and hardy, but that is another reason I am writing about it today - it is still ongoing! I love the idea of having something that works but that I can tweak and polish to make it better and better. After all... is anything ever really done? :)
- Details
- Written by: Glenda Gable
- Category: Backups and Restores
The company I am working for has a copy of a vendor database on-prem for reporting and analysis. The database is over 4 TB at this time, and constantly growing due to acquisitions and continued patient care.
Currently, our process for updating the data is log shipping, a tried and true method which has been successfully updating the database for years now. Requirements are ever-changing, and there is a desire to have data that is updated more often during the day. This article describes the current process, and explains how Availability Groups can be used to fulfill the requests.
Click here for a pdf that shows the current process, and both new processes, as explained in detail below.