Theoretical numeric data minimization

I have been working for about a year on a program that encodes and decodes text files. The problem is that the encoded files are far larger than their originals, and I want them to be smaller. There is an intermediate step in the conversion algorithm that changes the letters into ASCII codes, which is where the numbers come in. I was wondering if there is a way to reduce the number of characters by combining them in some way, while still being able to recover the original text.
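The original program isn't shown, but here is a minimal Python sketch of why an ASCII-digit intermediate step inflates the file, assuming the codes are written out as space-separated decimal text (an assumption; your format may differ):

```python
# Each character is one byte, but its decimal ASCII code takes
# two or three digit characters plus a separator -- so the
# encoded form is roughly three times the size of the original.
text = "Hi!"
codes = " ".join(str(ord(c)) for c in text)          # "72 105 33"
decoded = "".join(chr(int(n)) for n in codes.split())

print(len(text))    # 3 characters in the original
print(len(codes))   # 9 characters in the encoded form
print(decoded)      # round-trips back to "Hi!"
```

This is the size blow-up you are seeing: the information content hasn't changed, only its representation has become less dense.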

A simpler way to put it is:

I have two numbers that each range from 0-255. I need to combine them into one number between 0 and 255, or into two smaller numbers whose total length (character-wise) is no greater than that of the original two. The catch is that I also need to be able to get them back in their original form, so simply subtracting the smaller from the larger won't work. Also, all of the numbers must be integers, so division will probably not be very practical.

I've been trying to figure out an answer to this problem for a while, and I don't even know if there is one.

Thanks,

Xenone

Re: Theoretical numeric data minimization

Sounds like you want a data compression algorithm. There are many algorithms that accomplish this: have a look at the Wikipedia article "Data compression".
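To make the suggestion concrete, here is a short sketch using Python's standard-library `zlib` module (DEFLATE compression); the sample text is made up, but any text with repetition will compress:

```python
import zlib

# Repetitive text compresses well; random data would not.
original = b"the quick brown fox jumps over the lazy dog " * 20
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original))             # 880 bytes
print(len(compressed))           # far fewer bytes
print(restored == original)      # True: the round trip is lossless
```

This sidesteps the byte-pairing question entirely: a general-purpose compressor already finds and removes the redundancy across the whole file, which is where the real savings are.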