
How an Oversized Numeric Precision Can Quietly Break Your Math

Do you ever worry about how you declare NUMERIC or DECIMAL data types in SQL?
Do you sometimes “add a bit more” precision—just to be safe?
Have you considered how that small decision could actually change your arithmetic results?

I ran into this recently when comparing data between two environments that should have produced identical results. One calculated field was slightly off — and the culprit turned out to be a difference in numeric data type declarations.

It’s essentially a rounding problem, but one that happens implicitly during calculations. That makes it harder to detect and can lead to significant discrepancies when aggregating results (think invoices or financial reporting).

Fig. 1: Same values, different results

The only difference was the declaration of the value on line 1 as DECIMAL(26,2) versus DECIMAL(28,2) on line 3.

Both store two decimal places — but result B allowed two extra digits of precision. That’s fine when storing or comparing raw values, but once those numbers are used in arithmetic operations, small differences can creep in. Summed over hundreds of thousands of rows, those differences become noticeable and potentially misleading.
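To make the comparison concrete, here is a minimal repro built from the same values as the script at the end of this post. The commented outputs are what the division rules explained below predict; worth verifying on your own instance:

SELECT CAST(8002.48 AS decimal(26,2)) / CAST(163.0000 AS decimal(15,4)) AS result_A
-- expected: 49.0949693252, typed decimal(38,10)

SELECT CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)) AS result_B
-- expected: 49.09496933, typed decimal(38,8)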

Fig. 2: More decimal places, more clarity

If we increase the number of visible decimal places, we can actually see the precision difference.

So, what’s happening between Result A and Result B?

Every time SQL performs an arithmetic operation, it must determine the data type of the result. The rules differ depending on the operation — addition, subtraction, multiplication, or division.

How SQL Server Determines Result Precision

🔗 Precision, scale, and length (Transact-SQL)

For division, SQL Server uses the following formula:

Precision = p1 - s1 + s2 + max(6, s1 + p2 + 1)

Scale = max(6, s1 + p2 + 1)
Fig. 3: Screenshot from the documentation of the result data type formula for division

In our case (p1 = 28, s1 = 2, p2 = 15, s2 = 4):

Precision = 28 - 2 + 4 + max(6, 2 + 15 + 1) = 48
Scale = max(6, 2 + 15 + 1) = 18

Fig. 4: Data type of the result

However, there’s a second step — the result precision cannot exceed 38.

If it does, SQL Server adjusts it using this rule:

Scale cannot be greater than 38 - (precision - scale)

So in this example, the scale is reduced to:

38 - (48 - 18) = 8

For the DECIMAL(26,2) input, the raw precision is 26 - 2 + 4 + 18 = 46, so its scale is only reduced to 38 - (46 - 18) = 10. The two divisions therefore return DECIMAL(38,10) and DECIMAL(38,8), and the extra rounding on the (38,8) side explains the difference observed between the two environments.
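By the way, you can confirm the resulting data type directly, without materializing a table the way the script below does with SELECT ... INTO. SQL_VARIANT_PROPERTY reports it for any expression; a quick check for the DECIMAL(28,2) case:

SELECT SQL_VARIANT_PROPERTY(CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)), 'BaseType')  AS BaseType,
       SQL_VARIANT_PROPERTY(CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)), 'Precision') AS ResultPrecision,
       SQL_VARIANT_PROPERTY(CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)), 'Scale')     AS ResultScale
-- expected: decimal, 38, 8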

Takeaway

Be mindful when declaring your numeric precision and scale. “Just adding a few more digits” can cause unintended changes in how SQL Server performs arithmetic — and those small differences can scale up quickly in production data.

Here is a script if you want to test it or check it yourself:

DROP TABLE IF EXISTS dbo.res26, dbo.res28

SELECT TOP 1 CAST(8002.48 AS decimal(26,2)) / CAST(163.0000 AS decimal(15,4)) AS result
INTO dbo.res26
FROM sys.objects

SELECT TOP 1 CAST(8002.48 AS decimal(28,2)) / CAST(163.0000 AS decimal(15,4)) AS result
INTO dbo.res28
FROM sys.objects


SELECT * FROM dbo.res26

SELECT schema_name(o.schema_id) AS SchemaName,
       o.name,
       c.name AS ColName,
       t.name AS datatypeName,
       c.precision,
       c.scale
  FROM sys.objects     o
  JOIN sys.all_columns c ON o.object_id = c.object_id
  JOIN sys.types       t ON c.user_type_id = t.user_type_id
 WHERE schema_name(o.schema_id) = 'dbo'
   AND o.name = 'res26'

DECLARE @p1 tinyint = 26
       ,@s1 tinyint = 2
       ,@p2 tinyint = 15
       ,@s2 tinyint = 4
       ,@precision tinyint 
       ,@scale     tinyint

SELECT @precision = @p1 - @s1 + @s2 + GREATEST(6, @s1 + @p2 + 1),  -- @s2 instead of a hard-coded 4
       @scale     = GREATEST(6, @s1 + @p2 + 1)                     -- GREATEST/LEAST need SQL Server 2022+
   
SELECT 'starting precision' = '26',
       'precision' = @precision,
       'scale'     = @scale,
       'result precision' = IIF(@precision > 38, 38, @precision),
       'result scale' = LEAST(@scale, 38 - (@precision - @scale))

GO 

SELECT * FROM dbo.res28

SELECT schema_name(o.schema_id) as SchemaName,
       o.name,
       c.name as ColName,
       t.name as datatypeName,
       c.precision,
       c.scale
  FROM sys.objects     o
  JOIN sys.all_columns c ON o.object_id = c.object_id
  JOIN sys.types       t ON c.user_type_id = t.user_type_id
 WHERE schema_name(o.schema_id) = 'dbo'
   AND o.name = 'res28'


DECLARE @p1 tinyint = 28
       ,@s1 tinyint = 2
       ,@p2 tinyint = 15
       ,@s2 tinyint = 4
       ,@precision tinyint 
       ,@scale     tinyint

SELECT @precision = @p1 - @s1 + @s2 + GREATEST(6, @s1 + @p2 + 1),  -- @s2 instead of a hard-coded 4
       @scale     = GREATEST(6, @s1 + @p2 + 1)
   
SELECT 'starting precision' = '28',
       'precision' = @precision,
       'scale'     = @scale,
       'result precision' = IIF(@precision > 38, 38, @precision),
       'result scale' = LEAST(@scale, 38 - (@precision - @scale))

AI / ChatGPT incorrect information

I was just wondering how ChatGPT sometimes comes up with totally made-up information.
In this case it was about a well-documented SQL Server DMV, where there must surely be more correct than incorrect information available on the internet for it to learn from.

This was the incorrect information I was given:

sys.sql_expression_dependencies obviously DOES NOT have an “is_missing” column. It would be cool if it did, but it doesn’t.

Documentation: sys.sql_expression_dependencies (Transact-SQL).
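For what it’s worth, a rough equivalent of an “is_missing” flag does exist: referenced_id is NULL when SQL Server cannot resolve the referenced entity. A sketch along those lines (restricted to local references, since cross-database and cross-server names also leave referenced_id NULL):

SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
       OBJECT_NAME(d.referencing_id)        AS referencing_object,
       d.referenced_entity_name
  FROM sys.sql_expression_dependencies d
 WHERE d.referenced_id IS NULL             -- the referenced entity could not be resolved
   AND d.referenced_server_name IS NULL    -- skip cross-server references
   AND d.referenced_database_name IS NULL  -- skip cross-database references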

Don’t get me wrong, I love ChatGPT and AI tools as little helpers, but it makes me wonder how many people trust them entirely and just copy-paste whatever they produce.

How to protect your Intellectual Property in SQL Server



This is a blog post inspired by the T-SQL Tuesday invitation #167 by Matthew McGiffen.

Imagine a scenario where you need to deploy your application, which contains a significant amount of valuable know-how, to a SQL Server that is not under your control or is managed by other DBAs. In most cases, you are deploying your application to a customer, and naturally, you want to protect your intellectual property (IP). Is there a way to achieve this? The short answer is no, there isn’t. While there may be some security measures in place, I firmly believe there is no foolproof method to safeguard data from users with sysadmin and server admin access. Just a note: Always Encrypted was considered, but it was not a solution in our case.

So, how did I approach this problem? I decided to add additional layers of security to minimize the risk of unauthorized access to sensitive data.

On the SQL Server itself, I created a symmetric key, protected by a password, to encrypt the data. The next question was where to store the password required to open the symmetric key. One obvious option was to embed it in a stored procedure and encrypt the stored procedure itself. Even sysadmins do not have the ability to alter encrypted stored procedures from SQL Server Management Studio (SSMS). However, they can potentially use free third-party tools to decrypt the stored procedure and retrieve the original code. This taught me that stored procedure encryption, while helpful, is not entirely foolproof.
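As an illustration only (the key name, algorithm, and password below are placeholders, not the ones from the real deployment), the setup looked roughly like this:

-- Symmetric key protected by a password
CREATE SYMMETRIC KEY SK_AppData
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = '<StrongPasswordHere>'
GO

-- The opening logic, password included, hidden behind WITH ENCRYPTION
CREATE PROCEDURE dbo.usp_OpenAppKey
WITH ENCRYPTION
AS
BEGIN
    OPEN SYMMETRIC KEY SK_AppData
        DECRYPTION BY PASSWORD = '<StrongPasswordHere>'
END
GO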

Since my application relies on SQL data, another option was to embed the data directly into the application and compile it. This might seem like a bulletproof solution, as the code would be in binary form, making it unreadable. However, in reality, it’s relatively easy to decompile C# code, for instance, using free tools. Reverse engineering has become more accessible than it used to be.

In the end, the level of security you can achieve depends on various factors. If possible, dividing your solution and not centralizing everything can be beneficial. While you might encounter a curious DBA, they typically lack the development experience required to easily access your application code. You can also consider obtaining sensitive data from external resources and storing it only in the memory of the running application or in temporary objects within SQL Server. However, as mentioned earlier, it’s challenging to guarantee that someone won’t hijack your process, access the data, and save it elsewhere. Ultimately, the level of protection you need depends on the value of your data and whether others can make use of it without understanding its context.
