02/23/2004:

Foxpro and the 2GB Limit/STRENGTH


Craig Berntson explains Visual Foxpro/Foxpro and its 2GB database limit (strictly speaking, 2GB per table file).

Honestly, when I first heard programmers complaining about the 2GB limit, my first reaction was: who on Earth would want a 2GB database?

That's HUGE. Too huge.

A database that size would slow things down to the point where even Foxpro's legendary Rushmore optimization would have a hard time grinding through it.

Besides, there are tons of workarounds for this limit.

Though I believe these are not exactly workarounds per se, but more effective ways of solving the problem.

A better approach... a faster approach.

Better... it should be the first choice, instead of just letting the database 2Giga-bloat that much in the future.

('Giga-bloat'... I like that.)

As one of my friends would say, Warcraft III's greatest strength is its unit limits. Unlike the first version of Warcraft, where you could train warriors and footmen, drag them all to the opponent's camp, grab some soda or coffee while they march forward... and by the time you're back, the enemy is leveled to the ground.

Sans the challenge...

But no... the 2GB limit can be seen not as a weakness... but rather as a strength.

A well-planned and normalized database will most likely prevent things from ever reaching that limit.

Memos, general fields, or any objects that tend to bloat the database should be saved in a different location, with only the path and the filename stored in the database.

No need to cram all those jpegs and bitmaps into the database.
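
Something like this quick sketch, for instance (the table, fields, and path here are all made-up):

* Store only the file's location, not the image itself
CREATE TABLE photos (photo_id I, photo_path C(254))
INSERT INTO photos (photo_id, photo_path) ;
    VALUES (1, "d:\images\product001.jpg")

* Later, read the jpeg from disk only when you actually need it
SELECT photos
LOCATE FOR photo_id = 1
lcImage = FILETOSTR(photos.photo_path)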

You can even link to external text files, if you like, instead of opting for a memo field in some cases. You can even keep the primary key and the memo field in a separate, normalized table if needed.
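
A rough sketch of that last idea, with hypothetical tables:

* Keep the main table lean; the memo bloat lives in its own table
* (and its own .fpt file)
CREATE TABLE orders (order_id I, customer C(40), amount Y)
CREATE TABLE order_notes (order_id I, notes M)

* Join them back together only when the notes are actually needed
SELECT o.order_id, o.customer, n.notes ;
    FROM orders o INNER JOIN order_notes n ;
    ON o.order_id = n.order_id ;
    INTO CURSOR full_view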

Of course a developer/programmer should think of the future... database files do grow.

Like a pineapple pie in the middle of the sacred forest... it grows.

But then, you can chop things up... save records in tables created dynamically every day.

You can even do it monthly... or store tables separately by year... by month. That would make things even more organized. More compartmentalized.
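
The daily version could look something like this (table name and fields invented):

* Create today's table on the fly, named after the date
lcTable = "sales_" + DTOS(DATE())    && e.g. sales_20040223
IF !FILE(lcTable + ".dbf")
    CREATE TABLE (lcTable) (order_id I, amount Y, order_date D)
ENDIF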

You can easily create a 'fetching algorithm' that gathers only the necessary fields from the chopped tables... say, for a report or a statistical view.
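
Perhaps something along these lines, assuming hypothetical monthly tables sales_200401 through sales_200412 with identical structures:

* Gather a year's worth of chopped monthly tables into one report table;
* APPEND FROM matches fields by name, so only the fields present in
* "report" get pulled in
CREATE TABLE report (order_id I, amount Y, order_date D)
FOR lnMonth = 1 TO 12
    lcTable = "sales_2004" + PADL(ALLTRIM(STR(lnMonth)), 2, "0")
    IF FILE(lcTable + ".dbf")
        SELECT report
        APPEND FROM (lcTable) FOR amount > 1000   && only the rows the report needs
    ENDIF
ENDFOR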

The solutions are endless...

Yet in spite of these solutions, if you still find yourself preferring databases that can handle more than 2GB, and chopping just won't do... nor normalizing, nor calling the thundergods of database compression... there's always MSSQL, MySQL, FireBird, and the like.

Pardon the disorganized thoughts... it is 3:33AM already; I can't think straight, and I can't find a way to knock myself out to sleep.

