

How an Oversized Numeric Precision Can Quietly Break Your Math

Do you ever worry about how you declare NUMERIC or DECIMAL data types in SQL?
Do you sometimes “add a bit more” precision—just to be safe?
Have you considered how that small decision could actually change your arithmetic results?

I ran into this recently when comparing data between two environments that should have produced identical results. One calculated field was slightly off — and the culprit turned out to be a difference in numeric data type declarations.

It’s essentially a rounding problem, but one that happens implicitly during calculations. That makes it harder to detect and can lead to significant discrepancies when aggregating results (think invoices or financial reporting).

Fig 1.: Same values, different results

The only difference was the declaration of the value on line 1 as DECIMAL(26,2) versus DECIMAL(28,2) on line 3.

Both store two decimal places, but the DECIMAL(28,2) declaration allows two extra digits of precision. That’s fine when storing or comparing raw values, but once those numbers are used in arithmetic operations, small differences can creep in. Summed over hundreds of thousands of rows, those differences become noticeable and potentially misleading.

Fig 2.: More decimal places, more clarity

If we increase the number of visible decimal places, we can actually see the precision difference.
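
If you want to reproduce this without creating any tables, here is a minimal sketch using the same literals as the script at the end of this post; casting both results to DECIMAL(20,10) simply makes the extra decimal places visible:

SELECT CAST(CAST(8002.48 AS decimal(26,2)) / CAST(163.0000 AS decimal(15,4)) AS decimal(20,10)) AS result_a,  -- 49.0949693252
       CAST(CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)) AS decimal(20,10)) AS result_b   -- 49.0949693300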

So, what’s happening between Result A and Result B?

Every time SQL performs an arithmetic operation, it must determine the data type of the result. The rules differ depending on the operation — addition, subtraction, multiplication, or division.
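
A quick way to see which type SQL Server picked for an expression, without storing it anywhere, is the built-in SQL_VARIANT_PROPERTY function (shown here for the DECIMAL(28,2) case):

SELECT SQL_VARIANT_PROPERTY(CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)), 'BaseType')  AS base_type,
       SQL_VARIANT_PROPERTY(CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)), 'Precision') AS result_precision,
       SQL_VARIANT_PROPERTY(CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)), 'Scale')     AS result_scale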

How SQL Server Determines Result Precision

🔗 Precision, scale, and length (Transact-SQL)

For division, SQL Server uses the following formula:

Precision = p1 - s1 + s2 + max(6, s1 + p2 + 1)

Scale = max(6, s1 + p2 + 1)
Fig 3.: Screenshot from documentation of the result datatype formula for division

In our case:

Fig 4.: Data type of the result
Precision = 28 - 2 + 4 + max(6, 2 + 15 + 1) = 48
Scale = max(6, 2 + 15 + 1) = 18

However, there’s a second step — the result precision cannot exceed 38.

If it does, SQL Server adjusts it using this rule:

Scale cannot be greater than 38 - (precision - scale)

So in this example:

38 - (48 - 18) = 8

and the scale is reduced from 18 to 8, so the DECIMAL(28,2) division returns a DECIMAL(38,8). Running the same formula for the DECIMAL(26,2) declaration gives a precision of 46 and an adjusted scale of 10, so Result A is typed as DECIMAL(38,10) while Result B is DECIMAL(38,8). This scale reduction explains the rounding difference observed between the two environments.

Takeaway

Be mindful when declaring your numeric precision and scale. “Just adding a few more digits” can cause unintended changes in how SQL Server performs arithmetic — and those small differences can scale up quickly in production data.

Here is a script if you want to test or verify it yourself:

DROP TABLE IF EXISTS res26, res28

-- Store the same division twice; the only difference is the declared precision of the dividend
select TOP 1 CAST(8002.48 as decimal(26,2))/ CAST(163.0000 as decimal(15,4))  as result
INTO dbo.res26
from sys.objects 

-- Same calculation, but the dividend is declared as decimal(28,2)
select TOP 1 CAST(8002.48 as decimal(28,2))/ CAST(163.0000 as decimal(15,4)) as result
INTO dbo.res28
from sys.objects 


select * from dbo.res26
-- Inspect the data type, precision and scale SQL Server chose for the result column
SELECT schema_name(o.schema_id) as SchemaName,
       o.name,
       c.name as ColName,
       t.name as datatypeName,
       c.precision,
       c.scale
  FROM sys.objects     o
  JOIN sys.all_columns c ON o.object_id = c.object_id
  JOIN sys.types       t ON c.user_type_id = t.user_type_id
  where schema_name(o.schema_id) = 'dbo'
    and o.name = 'res26'

-- Reproduce the documented division formula for the decimal(26,2) case
-- (GREATEST and LEAST require SQL Server 2022 or Azure SQL)
DECLARE @p1 tinyint = 26
       ,@s1 tinyint = 2
       ,@p2 tinyint = 15
       ,@s2 tinyint = 4
       ,@precision tinyint 
       ,@scale     tinyint

SELECT @precision =  @p1 - @s1 + @s2 + GREATEST(6, @s1 + @p2 + 1),
       @scale = GREATEST(6, @s1 + @p2 +1)
   
SELECT 'starting precision' = '26',
       'precision' = @precision,
       'scale'     = @scale,
       'result precision' = IIF(@precision > 38, 38, @precision),
       'result scale' = LEAST(@scale, 38 - (@precision - @scale))

GO 

select * from dbo.res28

SELECT schema_name(o.schema_id) as SchemaName,
       o.name,
       c.name as ColName,
       t.name as datatypeName,
       c.precision,
       c.scale
  FROM sys.objects     o
  JOIN sys.all_columns c ON o.object_id = c.object_id
  JOIN sys.types       t ON c.user_type_id = t.user_type_id
  where schema_name(o.schema_id) = 'dbo'
    and o.name = 'res28'


-- Same formula for the decimal(28,2) case
DECLARE @p1 tinyint = 28
       ,@s1 tinyint = 2
       ,@p2 tinyint = 15
       ,@s2 tinyint = 4
       ,@precision tinyint 
       ,@scale     tinyint

SELECT @precision =  @p1 - @s1 + @s2 + GREATEST(6, @s1 + @p2 + 1),
       @scale = GREATEST(6, @s1 + @p2 +1)
   
SELECT 'starting precision' = '28',
       'precision' = @precision,
       'scale'     = @scale,
       'result precision' = IIF(@precision > 38, 38, @precision),
       'result scale' = LEAST(@scale, 38 - (@precision - @scale))

AI / ChatGPT incorrect information

I have been wondering how ChatGPT sometimes comes up with completely made-up information.
In this case it was about a well-documented SQL Server DMV, where the correct information on the internet (the material it learns from) should surely outweigh the incorrect.

In this case, this was the incorrect information I was given:

sys.sql_expression_dependencies does NOT have an “is_missing” column, obviously. It would be cool if it did, but it doesn’t.

Documentation here:
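
If you want to check for yourself which columns the view really exposes, a simple catalog query will do:

SELECT name, column_id
  FROM sys.all_columns
 WHERE object_id = OBJECT_ID('sys.sql_expression_dependencies')
 ORDER BY column_id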

Don’t get me wrong, I love ChatGPT and AI tools as little helpers, but it makes me wonder how many people trust them entirely and do copy-paste type of work with them.

Extended events

This post about SQL Server Extended Events was written based on invitation #166 from scarydba.com. I am still not sure why I didn’t notice the invitation on the official tsqltuesday.com website, but I found it via #tsql2sday.

Extended Events have been around since SQL Server 2008 (with UI support in SSMS since 2012), so what has changed? Microsoft has made some enhancements to reduce the performance impact and has added new events, but what has not changed is the user-unfriendliness. I feel this is the biggest downside of Extended Events, especially the need to process results from XML.
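
Just to show what I mean by the XML pain, here is a minimal sketch of reading a ring_buffer target; the session name 'my_session' and the duration field are placeholders for whatever your session actually collects:

WITH target_xml AS (
    SELECT CAST(t.target_data AS xml) AS target_data
      FROM sys.dm_xe_sessions s
      JOIN sys.dm_xe_session_targets t ON s.address = t.event_session_address
     WHERE s.name = 'my_session'            -- placeholder session name
       AND t.target_name = 'ring_buffer'
)
SELECT e.value('@name', 'nvarchar(128)')                        AS event_name,
       e.value('@timestamp', 'datetime2')                       AS event_time,
       e.value('(data[@name="duration"]/value)[1]', 'bigint')   AS duration_us   -- assumes the event exposes a duration field
  FROM target_xml
 CROSS APPLY target_data.nodes('/RingBufferTarget/event') AS x(e)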

There has been zero development in SSMS, but you have the option to use an extension for Azure Data Studio, which is handy and provides an interface similar to SQL Profiler. Speaking of which, SQL Profiler has been deprecated since when, 2016? How long will it still be around? We all know the downsides of that tool, but wasn’t it handy at least once in our lives?

By the way, Azure Data Studio: why are the majority of DB users still using SSMS? Is it force of habit? Do they think it is only for Azure? Well, I do use SSMS more often because it is available on the instances I connect to, but sometimes I blame myself for being too lazy to drive the change. With this thought I found a nice article comparing the features of ADS vs SSMS; it feels like SSMS got a lot of points for features that are not used, or at least not used often.

Another thing with Extended Events is that they are not always cheap and can have an impact on the server. Rule no. 1 should therefore be to set up your filters properly and to be careful with predicates that can only be evaluated after the event has already been collected; those won’t help to optimize collection performance.
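
As a sketch of what I mean (the session name, event, and thresholds are purely illustrative), a predicate like this is evaluated before the event is fully collected, so cheap filters on event fields and global predicate sources pay off:

CREATE EVENT SESSION [long_batches] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
(
    ACTION (sqlserver.database_name, sqlserver.sql_text)
    WHERE duration > 1000000                 -- microseconds; filtered before full collection
      AND sqlserver.database_id = 5          -- adjust to your database
)
ADD TARGET package0.event_file (SET filename = N'long_batches.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS)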

Recently I came across a great library, XESmartTarget, that helps you read Extended Events externally and save the data in a better structure than XML, e.g. directly into a DB table. There are a lot of options that this library offers, and we are going to use it for new features in the Server Monitoring and Tuning tool (SMT).

Thank you for this topic and I look forward to reading posts of others.
