FoxPro and the 2GB Limit/STRENGTH

Filed Under (Visual FoxPro, work.BLOG) by WildFire on 16-12-2004

Craig Berntson writes about Visual FoxPro/FoxPro and its 2GB database limit.

Honestly, when I first heard programmers complaining about the 2GB limit, my first reaction was: who on Earth would want a 2GB database?

That's HUGE. Too huge.

That would slow things down so much that even FoxPro's legendary Rushmore optimization would have a hard time grinding through it.

Besides, there are tons of workarounds for this limit.

Which, I believe, are not exactly workarounds per se, but more effective means of solving the problem.

A better approach... a faster approach.

Better... it should be the first choice, instead of letting the database 'Giga-bloat' its way toward 2GB in the future.

('Giga-bloat'... I like that.)

As one of my friends would say, Warcraft III's greatest strength is its unit limit. Unlike the first version of Warcraft, where you could train warriors and footmen, drag them all to the opponent's camp, grab some soda or coffee while they marched forward... and when you got back, the enemy was leveled to the ground.

Sans the challenge...

But no... the 2GB limit can be seen not as a weakness... but rather as a strength.

A well-planned and normalized database will most likely prevent things from ever reaching that limit.

Memos, general fields, or any other objects that tend to bloat the database should be saved in a separate location, with only the path and filename stored in the table.

No need to cram all those JPEGs and bitmaps into the database.

In some cases you can even link to external text files instead of opting for a memo field. You can also split the primary key and the memo field off into a separate, normalized table if needed.
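A rough sketch of both ideas in VFP. The table names, fields and file paths (employee, emp_notes, d:\photos, d:\notes) are all invented for the example:

* Keep only the path to the heavy stuff in the table, not the blob itself.
CREATE TABLE employee (emp_id C(10), emp_name C(40), photopath C(120))
INSERT INTO employee (emp_id, emp_name, photopath) ;
    VALUES ("E0001", "Juan dela Cruz", "d:\photos\e0001.jpg")

* Load the picture only when it is actually needed, e.g. on a form:
* thisform.imgPhoto.Picture = ALLTRIM(employee.photopath)

* Long notes can live in plain text files instead of a memo field...
STRTOFILE("Promoted to senior clerk.", "d:\notes\e0001.txt")
cNotes = FILETOSTR("d:\notes\e0001.txt")    && ...and be read back on demand

* ...or the memo can sit in its own table, joined on the primary key,
* so its .fpt bloats on its own and leaves the main table lean.
CREATE TABLE emp_notes (emp_id C(10), notes M)

And since the 2GB ceiling applies per file anyway, the less that lands in any one .dbf or .fpt, the better.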

Of course a developer/programmer should think of the future... database files do grow.

Like a pineapple pie in the middle of the sacred forest... it grows.

But then, you can chop things up... save records in tables created dynamically every day.

You can even do it monthly... or keep the data in separate tables by year... by month. That would make things more organized. More compartmentalized.
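Something like this would do the monthly chopping in VFP (the sales table and its fields are made up for the sketch):

* One table per month, created on the fly the first time it is needed.
cTable = "sales_" + LEFT(DTOS(DATE()), 6)    && e.g. sales_200412

IF !FILE(cTable + ".dbf")
    CREATE TABLE (cTable) (invoice_no C(10), sale_date D, amount N(12, 2))
    USE    && close it; INSERT INTO will open it again when needed
ENDIF

INSERT INTO (cTable) (invoice_no, sale_date, amount) ;
    VALUES ("INV-0001", DATE(), 1500.00)

INSERT INTO opens the table by itself if it isn't open yet, so a routine like this can sit wherever new records come in.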

You can easily write a 'fetching algorithm' that gathers only the necessary fields from the chopped tables, say for a report... or a statistical view.
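A rough sketch of that fetching idea, again with invented monthly sales tables and report columns:

* Gather only the report columns from each monthly table into one cursor.
CREATE CURSOR rpt (invoice_no C(10), sale_date D, amount N(12, 2))

FOR nMonth = 1 TO 12
    cTable = "sales_2004" + PADL(ALLTRIM(STR(nMonth)), 2, "0")
    IF FILE(cTable + ".dbf")
        SELECT rpt
        * Matching fields come across by name; the bulky stuff stays behind.
        APPEND FROM (cTable + ".dbf") FIELDS invoice_no, sale_date, amount
    ENDIF
ENDFOR

* The cursor now feeds the report or a quick summary.
SELECT SUM(amount) AS total_sales FROM rpt INTO CURSOR summary
? summary.total_sales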

The solutions are endless...

Yet in spite of these solutions, if you still find yourself preferring a database that can handle more than 2GB, and chopping just won't do... nor normalizing, nor calling the thunder gods of database compression... there's always MSSQL, MySQL, Firebird and the like.

Pardon the disorganized thoughts... it is 3:33 AM already; I can't think well and I can't find a way to knock myself down to sleep.