Overclocking... Must go faster!

Heya, I'm new to the overclocking game, but I think I have all the right gear... anyway, here are my system specs:
Intel Pentium 4 3.0GHz (800MHz FSB)
Zalman CNPS7000A-Cu
Zalman ZM-F1 80mm
2x 512MB PC3200 DDR
MSI 875P Neo Motherboard
MSI 52x24x52x CD-RW
Samsung 20 Gigabyte HDD
Western Digital 80 Gigabyte HDD
MSI Geforce Ti4200MX (4x AGP) 128MB
Now I've got some decent cooling in there, notched up to full RPM... I still need to invest in a fanless Northbridge cooler though.
Basically, can anyone suggest some BIOS voltage/FSB clock speeds that you reckon my system can handle? I'd prefer to do it all within the BIOS if possible. I had a little play around with CoreCenter, but that gets up to 220MHz FSB then freaks out... surely I can go faster!?
Anyway hope someone out there can help!
Andy_ABC1 ([email protected])

Yes, and the first and best advice that I can give you is: A) Don't use Core Center to overclock, as this utility is still a work in progress, so I would use the BIOS for any OCing. But if you must use a software-based overclocking utility, then please use ClockGen (http://www.cpuid.com), as it is the best FSB management program out there. And the one thing that you must remember to do if you overclock the FSB is to set (lock) your AGP/PCI buses as close to their default frequencies as possible (66/33MHz), or you will most definitely experience problems across the whole board, i.e. your graphics card will act up, your onboard sound too, and your IDE channels may start giving you data errors (see the sketch below)... Wow, that was a lot for just letter A.
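To put rough numbers on why that lock matters, here is a small arithmetic sketch. The /3 and /6 dividers are an assumption about boards of that era, not something from the MSI manual:

public class BusMath {
    public static void main(String[] args) {
        int[] fsbs = {200, 220, 250};          // base FSB clocks in MHz
        for (int fsb : fsbs) {
            double agp = fsb / 3.0;            // AGP spec is 66 MHz
            double pci = fsb / 6.0;            // PCI spec is 33 MHz
            System.out.printf("FSB %d -> AGP %.1f MHz, PCI %.1f MHz%n", fsb, agp, pci);
        }
        // Without the lock, FSB 220 pushes AGP to ~73 MHz and PCI to ~37 MHz,
        // which is exactly where graphics glitches and IDE data errors begin.
    }
}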
And B) The most important thing to remember is that you must go slow!! If you rush, you will not have a very successful overclocking experience. Set your FSB to 220 and leave your VCore at the default (1.550V); you should NOT have to raise your CPU voltage until you approach 240 or 250MHz, as these new "C" P4s rarely require a voltage boost. The next thing you are going to have to do is set your memory timings. I would start out at SPD, and as long as you keep your FSB overclock to a minimum (under 225MHz) you should be fine with your memory staying at a 1:1 ratio; if you go much higher you will eventually have to lower your memory ratio in order to keep it within its speed rating (see the sketch after this paragraph). And after you raise it to 220, leave it there for a day or more: play games, or loop some benchmarks for a good hour at a time, just to see where your CPU temps are and how long they take to return to normal. It would also be a good idea to go into the MSI P4 and Celeron forum to see what other members have their systems set to. And don't forget... go slow, and a few more MHz is NOT worth ruining your CPU or motherboard. ...Sean REILLY875
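For what it's worth, here is the arithmetic behind that ratio advice as a rough sketch; the 5:4 divider is an assumption about what the board offers, so check your BIOS:

public class MemMath {
    // effective DDR rating for a given FSB clock and CPU:DRAM ratio
    static int ddr(int fsbMhz, double cpuToDram) {
        return (int) Math.round(fsbMhz / cpuToDram * 2);  // DDR transfers twice per clock
    }

    public static void main(String[] args) {
        // PC3200 memory is rated for DDR400, i.e. a 200 MHz memory clock.
        System.out.println("FSB 220 @ 1:1 -> DDR" + ddr(220, 1.0));   // DDR440: a mild overclock
        System.out.println("FSB 250 @ 1:1 -> DDR" + ddr(250, 1.0));   // DDR500: well past the rating
        System.out.println("FSB 250 @ 5:4 -> DDR" + ddr(250, 1.25));  // DDR400: back in spec
    }
}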

Similar Messages

  • Neo boards: Chipset Overclocking must stop.

    Hello,
    Well, it seems that good ol' Tom is taking a stand now.
    Read:
    Quote
    So which board comes out on top in the price/performance ratio? First, MSI and AOpen should stop overclocking their boards' chipsets. We also think the AOpen is overpriced for an 845PE board. Both of these models fail to gain our recommendation.
    Well, it is getting interesting. With the new BIOS only days away, I'll ask myself (and MSI) if we will finally get a non-overclocked and stable board.
    I hope MSI listens to their customers and reviewers and decides to do something about this issue.
    Thanks,
    Arthur.

    Dear [Maesus],
    I have tried to duplicate the issue; I saw it only once, when I reflashed the BIOS and then changed the CPU clock setting.
    My procedure was as below:
    1.      Boot system
    2.      Update BIOS to 1.9 version
    3.      Restart the system to BIOS setting
    4.      Change the CPU Bus Clock to 229
    5.      Save the settings & exit, and the system restarts automatically
    6.      Then I saw the CPU was only running 1.86GHz (my CPU is 3.2GHz, FSB 800MHz)
    But I would like to explain that the procedure above is incorrect.
    The correct procedure should be as below:
    1.      Boot system
    2.      Update BIOS to 1.9 version
    3.      Restart the system to BIOS setting
    4.      Load BIOS setup default first
    5.      Change the CPU Bus Clock to 229
    6.      Save the settings & exit, and the system restarts automatically
    7.      Then I saw the CPU running at 3.66GHz (my CPU is 3.2GHz, FSB 800MHz)
    Note: You should load the BIOS setup defaults after reflashing the BIOS, because all the original default settings are erased by the upgrade. So we always recommend customers load the BIOS defaults first before changing any BIOS settings.
    BTW, we have updated the BIOS to support Prescott CPUs. Users can decide whether to update or not.
    The new release BIOS is attached.
    Best regards!
    [MSI Staff]

  • New PC - must be faster?

    Hi all,
    I just bought a new PC with the following specs:
    Intel Core i7-3820 @ 3.6GHz
    32 gb ram
    64 bit windows
    NVIDIA Geforce GTX 680
    After Effects CS6 - 11.0.2.12
    Then I configured the preferences in After Effects following the advice given in:
    http://blogs.adobe.com/toddkopriva/2012/05/gpu-cuda-opengl-features-in-after-effects-cs6.html
    http://blogs.adobe.com/toddkopriva/2011/02/optimizing-for-performance-adobe-premiere-pro-and-after-effects.html
    But still, the rendering feels very slow, not what I expected when buying a $2000 workstation.
    Settings: (screenshot not included)
    Is there a reliable benchmark project which will give me hard statistics on my setup? Thanks!

    In the given example there is barely any footage I/O, so it does not have a significant influence on rendering times. And even if there is a lot of footage, it may not matter at all where it's stored if there is equally much processing. Same as always: it depends on the specific project. SSDs for video are only useful if you can play multiple streams in real time and any processing can just as well happen in real time, meaning they may be nice for color corrections or simple edits, but not for complex compositing work, least of all when plug-ins are involved that impose their own limitations.
    Mylenium

  • Kt3 ultra and 9600xt fast write

    AMD Barton 2500+
    Dr. Thermal TI-77N
    KT3 Ultra standard mobo w/5.7 BIOS
    Antec Performance II SX835II case w/350W PSU
    Corsair 512MB PC2100 x3
    ATI 9600XT
    Audigy ZS Gamer
    As some of you may be aware, the 9600 family of cards is very picky about fast writes. In order to run games without lockups/crashes I must enable fast writes. The problem is that when I do, I get corrupt text, lines around my cursor, video lag, etc. I recently switched to a Barton 2500 that runs at 1.87GHz at 133FSB; before that I ran an 1800 that did not have any of these problems with fast writes enabled. The BIOS and XP recognized it as a 2500+, and I have the ratio set to Auto and the memory set to HCLK. I tried various drivers for the gfx card and all produced the same results. I also tried all combinations of BIOS fast writes on, gfx control panel off, on, etc. Any ideas?
    TIA
    -leo
    pic of lines/box around cursor

    Fast writes disabled = game will lock; the 9600 family of cards is notorious for this. I was able to fix my corrupt text and cursor problem by moving the slider under Troubleshooting in the ATI panel down a notch. In-game I'm still seeing some problems though; hopefully the game-specific forums can give me some help. This is really annoying because the damn game ran fine with my 1800. I went to the 5.09 drivers on the recommendation of another forumite.
    screeny to show in game problem for those that care
    www.theoldschoolers.com/Forums/uploads/post-11-1134154170.jpg
    Anyway, this is a more game-specific matter now, but I figured I'd post a follow-up for you guys.

  • Must startup clock equal configuration clock?

    Hi,
    I have a Virtex-6 configured from Platform Flash in SelectMAP slave mode (external clock), with the configuration clock provided to both by an on-board clock generator (40MHz; it must be fast to meet the PCIe <100ms configuration time).
    Often the FPGA DONE bit goes high (external LED...), and everything is OK (reading the status register via iMPACT) except:
    [5] GTS_CFG_B STATUS : 0
    Signal integrity on CCLK is OK (checked with scope).
    Bit file generated using default options.
    A local FAE suggested setting the StartupClk option to UserClk.
    Question: must the start-up clock be the same as the configuration clock? Why?
    Could this explain the intermittent failures I'm seeing?
    Thanks in advance, Gal.

    Since you are doing the configuration in slave mode, the CCLK will not be generated by the config logic.

  • Does compiling the kernel make the system run significantly faster?

    It used to be that when you compiled your own kernel (at least ten years ago), you would get a significant speed boost. However, with the speed of machines nowadays, I wonder how much of that remains, if any. Does compiling the kernel give a significant increase in daily performance or otherwise? If so, what portion of the system does it make faster than before? In short, is it worth it? I appreciate any comments.

    lilsirecho wrote:
    Broch:
    The title of this thread doesn't limit responses to just redoing the kernel, since it says "system run significantly faster".
    The kernel is not the system... the system runs from the HDD and RAM.
    To speed up that process, run everything possible in RAM for a system speed-up.
    I think 500 packages loading from scratch to desktop in 45 seconds is fast. And running after that in RAM is super-fast system speed, "significantly faster".
    The speed I quote is possible now. With UDMA capability in the kernel, at least half of that install time is probable.
    And that is for over 500 packages into KDE, no less!
    Using flash on IDE removes the seek latency from the system and therefore must be faster.
    Changing the kernel will indeed speed up the system... but only if it includes UDMA for the flash drives already available but not configured in the present kernels.
    Using flash drives for IDE cache repos will allow faster boot times and reduce the load on the system. You don't normally run many, many programs at once, so put them in cache with a script to permit fast loading.
    That is "system running significantly faster" and all in ram, the fastest in the computer.
    No.
    "Does compiling the kernel make the system run significantly faster?"
    Compiling the kernel, not whatever else.
    I did not suspect that this required explanation?
    But if you really want the noise, let's keep just "faster?"
    And the answer is: yes/no,
    whatever that means.
    Or you are joking.
    Anyway, it seems that results may vary.
    Last edited by broch (2008-02-06 04:32:01)

  • How can I disable the double-tap to zoom in function?

    I play a game where I must click a button quickly, but tapping the button zooms the screen in/out all the time.
    I don't need to zoom in at all. So, how can I disable the tap-to-zoom function?

    Double tap focuses on elements on the page, and pinching in and out is the main way of zooming in Fennec. Currently there is no way to disable this feature.

  • How to create a Real Time Interactive Business Intelligence Solution in SharePoint 2013

    Hi Experts,
    I was recently given the requirements below to architect/implement a business intelligence solution that deals with instant/real-time data modifications in the data sources. After going through many articles, e-books and expert blogs, I am still unable to piece together the right information to design an effective solution to my problem. The client is ready to invest in the best infrastructure in order to achieve all the requirements below, but it still seems like a sword of Damocles hanging over my neck in every direction I go.
    Requirements
    1) Reports must be created against many-to-many table relationships and against multiple data sources(SP Lists, SQL Server Custom Databases, External Databases).
    2) The Report and Dashboard pages should refresh/reflect with real time data immediately as and when changes are made to the data sources.
    3) The reports should be cross-browser compatible (must work in Google Chrome, Safari, Firefox and IE), cross-platform (Mac, Android, Linux, Windows) and cross-device compatible (tablets, laptops & mobiles).
    4) Client is Branding/UI conscious and wants the reports to look animated and pixel perfect similar to what's possible to create today in Excel 2013.
    5) The reports must be interactive, parameterized and sliceable, must load fast, and must have the ability to drill down or expand.
    6) Client wants to leverage the Web Content Management, Document Management, Workflow abilities & other features of SharePoint with key focus being on the reporting solution.
    7) Client wants the reports to be scalable, durable, secure and other standard needs.
    Is SharePoint 2013 Business Intelligence a good candidate? I see the below limitations with the Product to achieve all the above requirements.
    a) Power Pivot with Excel deployed to SharePoint cannot be used, as the minimum granularity of the refresh schedule is daily. This violates requirement 2.
    b) Excel Services, PerformancePoint and Power View work on an in-memory representation of the data. This violates requirements 1 and 2.
    c) SSRS does not render the reports as stated in requirements 3 and 4 above; report rendering on the page is very slow even for sample data. This violates requirements 6 and 7.
    Has anyone been able to achieve all of the above requirements using the SharePoint 2013 platform, or any other platform? Please let me know the best possible solution. If possible, redirect me to whitepapers, articles and material that will help me design an effective solution. Eagerly looking forward to hearing from you, experts!
    Please feel free to write in case you have any comments/clarifications.
    Thanks, 
    Bhargav

    Hi Experts,
    Requesting your valuable inputs and support on achieving the above requirements.
    Looking forward to your responses.
    Thanks,
    Bhargav

  • Best approach - To create an RTF template having more than 50 tables.

    Hi All,
    Need your help: I am new to BI Publisher. Currently we are using BIP 11g.
    I want to develop an .rtf template having lots of layouts and images.
    Data is coming from different tables (for example, pulling from around 40 tables). When I tried to pull data from 5 tables by joining them, it took a long time using a data model in BI Publisher 11g, saved as XML and used in the Word doc.
    Could you please suggest the best approach: whether I should develop the .rtf template via a data model or via a query to generate the report.
    Also please suggest/guide me.
    Regards & thanks in advance.

    These are very specific requirements.
    First of all, it relates to the logic behind the report.
    For example: are the 50 tables related? Or are they 50 independent tables? Or maybe 5 related and the others independent?
    Based on the relations between the tables you create your SQL statement(s).
    How many SQL statements you will have leads to identifying the way to get the data, for example by package or trigger etc.
    Keep in mind the size of the resulting select statement(s): if the size is, say, 1MB it should be fast to get the report, but 1000MB can consume a lot of time.
    Also keep in mind that the time is spent not only selecting the data but also merging the data with the template.
    It looks like experimenting, and knowing the full logic of the report, is the only way to get the needed output in terms of data and time.

  • Premiere Pro and Adobe Media Encoder running slow

    Hello everyone,
    I'm having trouble with CS4, which is running significantly slower than CS3 did on an older machine. The CS4 suite is installed on a Dell Precision M6400:
    Windows Vista 64-Bit
    Intel Core 2 Duo CPU T9400 @ 2.53 GHz
    8 GB RAM
    NVIDIA Quadro FX 2700M Graphics with 512MB dedicated memory
    The OS is running on a 57.5 GB HD (C:) and the Adobe suite is installed on a 298 GB Solid State HD (D:), except for Adobe Media Encoder, which is installed on the C: drive.
    My project has four 15-16 minute sequences. The sequences are in DV NTSC, 29.97 fps. My scratch disks are set to a folder on the C: drive; the Media Cache Files and the Media Cache Database point to a folder on the D: drive.
    These are some of the problems I'm having:
    - Premiere Pro CS4 generates peak files every time I open a project
    - It then takes 3-5 minutes to render before I can preview a 16-minute sequence
    - Adobe Media Encoder takes 5 hours to render a 16-minute sequence (as FLV) that has been previously rendered
    - AME takes 1 hour to render every 15-minute sequence that has never been rendered before
    Are my settings affecting their performance? Is there any way to improve it? Thanks.
    (Premiere Pro is the only app that is slow)

    The data rate for replay is one thing; the data rates from disk to memory and then from memory to CPU and back the other way are different matters and ought not to be confused. It is well established that for a computer to edit AVCHD you need top-end components, and note that I said there were three tasks to distinguish, with increasing hardware requirements: merely replaying the video, specifying edits in the editor, and then the rendering. It is commonly accepted by all the industry vendors that to do remotely commercial AVCHD rendering you need a minimum of a quad-core CPU, and that eats data fast; in order not to let it go to waste you need a fast motherboard bus and fast memory, and in order for none of those to go to waste you need the fastest disk setup you can manage. I in fact have a 4-disk RAID0 volume using SATA (I think the disk model is SATA II, but I have to await its return from the repair center before I can confirm). For this RAID0 volume I have run speed-test software from Blackmagic, because I have one of their HDTV capture cards. It recorded that this volume, which remember is doing parallelized I/O, is just fast enough to receive an encoded HDTV stream from the Blackmagic card but too slow to receive an uncompressed HDTV stream; indeed, when I tried both, I found the volume did keep up with compressed but fell behind with uncompressed. Remember that with a RAID0 volume of 4 SATA II disks a given file gets spread over the four disks, and hence I/O is spread over those four 3Gb/s data lanes. Also remember that with these disks 3Gb/s is just a burst speed; for AVCHD we are interested in sustained serial I/O, which is much lower.
    Before my machine broke down, I found that it took 5 hours to render 33 minutes of HDTV, albeit transcoding as it went along from AVCHD to a Vista-only Microsoft HD format. Another interesting thing I found was that the longer the render ran, the slower it became: the estimated time started at 3 hours, the actual was five, and the last third took maybe 3 hours. Because the machine broke after that run, I couldn't isolate the bottleneck. Bear in mind that at the repair shop we found the quad-core had only half the necessary electrical power plugged in; the monitoring software showed, however, that it constantly ran at around 90% of whatever capacity that reduced power supply permitted. So we can puzzle over why it got slower and slower while CPU consumption remained consistent and near full capacity; memory was not the bottleneck, because that was constant at 6.4GB. You could say it was perhaps performing like a dual-core and hitting some sort of wall; if you had a 1-hour render with that rate of performance degradation factored in, what would happen to the render time? At 3 hours you could be running indefinitely. I hope that when the machine comes back, the correct power supply will make it behave like a quad-core should for this type of application. Anyway, I have two theories for the degradation. The first is simply that PrPro CS4 was getting its knickers in a twist and doing more computation per minute of video as time went by, maybe due to internal resource management related to OO-style programming, or to disk I/O falling behind. Both theories have problems; for the latter, CPU usage should then have dropped as well.
    Anyway, you really need a quad-core system and blazing-fast disks to work fully with AVCHD commercially. We found an external SATA II disk, so if I were you I would just go get one and move on with your life.
    Message of 03/06/09 16:08
    From: "Jim Simon"
    To: "JONES Peter"
    Cc:
    Subject: Premiere Pro and Adobe Media Encoder running slow
    For AVCHD you MUST have FAST disks.
    AVCHD actually has a lower data rate than DV. You need lots of CPU muscle, but disk speed is really not a factor specific to AVCHD. Anything that works for DV will work just as well for AVCHD (and HDV as well).
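    To put rough numbers on that comparison, here is a back-of-envelope sketch using the nominal rates of the two formats (assumed spec figures, not measurements from either poster's system):

    public class DataRates {
        public static void main(String[] args) {
            double dvMbps = 25.0;     // DV video is a fixed ~25 Mbit/s stream
            double avchdMbps = 24.0;  // AVCHD tops out at 24 Mbit/s, often 17 or less
            System.out.printf("DV:    %.1f MB per minute%n", dvMbps / 8 * 60);
            System.out.printf("AVCHD: %.1f MB per minute (worst case)%n", avchdMbps / 8 * 60);
            // The disk workload is comparable; the difference is that decoding
            // AVCHD's H.264 compression is what demands the CPU muscle.
        }
    }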

  • Video streaming stalls in full screen mode

    I'm not sure I'm in the right category, but my laptop runs Tiger and I use video streaming from different websites like ESPN Live and other websites with their own players. I even use some that require a program download to be able to see their content. From all those sources the video runs pretty smoothly at normal size. As soon as I put it to full screen, it stalls in a certain rhythm, no matter whether the whole program is buffered or not.
    I achieved better results with one of the providers using WMP, which via Flip4Mac streams into QuickTime. I just don't know if and how I could incorporate those other website players into QuickTime, and whether this would be the solution.
    I freed up some of my HD to achieve better performance. I have 18 GB free out of 100... is this too little?
    Do I need a better video card ?
    MacBook Pro (Intel) 100 GB 2.16 GHz 1.677 GB RAM
    Thanks
    bafomet

    Ba,
    It may be of interest to know that streaming video codecs are designed for computers of a specific age (CPU and GPU speeds).
    DVD-quality video streams MPEG-2 compression at about 10 Mbit/s, and the older MPEG-1 compression, designed for NTSC television, streams at about 2 Mbit/s. At 10 Mbit/s, the processor must work faster to decompress and display the image. The latest compressions are based on MPEG-4. Even a G3 iBook could display MPEG-4 full screen, so I don't think you need a faster GPU.
    Microsoft codecs & containers, the last time I checked, were ancient: based upon MPEG-2. So, check your speed:
    DSL Reports speed tests
    http://www.dslreports.com/stest
    If you use Wi-Fi, IEEE 802.11n is fastest.
    Increase the cache size for your codec, if you can. It needs to be contiguous, so I dare to recommend iDefrag 2 (US$30) to clean your remaining disk space and obtain about 20 GB of contiguous space. Video display uses many operating system files, which are likely not fragmented; but you can also de-fragment the Windows codec if it lies in over 100 fragments. (The Spotlight database usually has over 1000 fragments; but pauses in it don't affect one's pleasure in using it.)
    My low-resolution DVDs are interpolated by my graphics card and look beautiful at maximal resolution. Streaming graphics apparently isn't programmed to interpolate low resolutions: it demands high resolution and high bandwidth.
    If you can't increase the cache in WMP, you can lower the resolution of your monitor. That may be why this System Preference was designed to sit on the desktop's top bar.
    Bruce

  • Video playback controls in full-screen mode???

    I just downloaded a TV show and want to watch it full screen. Unfortunately, in full-screen mode I can't seem to figure out how to control the playback (e.g. Rewind a few seconds to hear something I missed).
    I tried all sorts of keyboard shortcuts to no avail. Am I missing something?


  • Question which really disturbs you

    Hi experts,
    I have a question which has really been tempting me to ask somebody, but due to its silly nature I never asked anyone. Anyway, I'm posting it now.
    I'm inserting millions of rows into a table, which generates enormous redo records and fills the redo buffer. At some point (possibly when the redo buffer is 1/3 full) it is written to the redo logs, i.e. even the uncommitted transaction gets logged there.
    I haven't committed the transaction yet. My transaction never ends; I continue to insert millions more records, which fills the redo log file and causes a log switch. My database is in archivelog mode.
    my question is,
    1. Is the uncommitted transaction written to the archived logs?
    2. What will be the status of the control file at this point in time?
    3. If I issue a rollback, will the data be retrieved from undo?
    4. How does the transaction get synchronized in the database?
    Please reply.
    Thanks
    SM

    Oracle will 'empty' the redo buffer to the redo log file:
    - when it has filled to 1MB
    - when it is 1/3 full
    - when a commit happens
    - every 3 seconds
    - when the DBWR is called to move out dirty blocks.
    This is documented at
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14220/process.htm#i7261
    One of the benefits - we can minimize the time required to return from a commit request.
    Remember that a commit request is simply a confirmation that the last remaining portion of a transaction's redo has been written to the redo log file. Once the commit has returned, we know that the redo information is safely tucked away, so other processes can use the resources we had been using.
    (At the same time - we do not 'release' any resources as part of a commit. Others will come along and look whether the resources are in use, will note that we no longer need them, and will release the resources for us. So a commit can happen really fast.)
    One common myth is that the tablespace's data files are important. They are nothing more than a backup (or materialized view) of the most recent contents of the database buffer cache plus any changes in the redo log file. By calling up the backup in the data files, we can avoid recreating a block from the initial format state and applying all redo. In an extreme case (called crash recovery) we actually do use a previous known image of the database blocks and apply redo.
    Oracle will create an archive copy of a redo log after switching from the redo log and before reusing it. It does not matter whether a commit has happened.
    During a transaction, the person making a change updates the block in place. A copy of the before image of the changed parts of the block (changed parts of the row) is placed in the UNDO or ROLLBACK area. Therefore, when a commit happens, the change is already in the block.
    When a person 'A' wants to roll back, the block is rebuilt to the 'start of transaction' state. This is expensive - Oracle is built on the assumption that commits must be fast and commits happen a lot more often than rollbacks.
    If person 'B' starts a long query (say a table scan) on the same table person 'A' is updating, but the scan gets to the block after person 'A' has updated it, a 'consistent read' copy of the block will be created and the rollback will be applied to the block. This ensures that person 'B' always sees the block the way it looked when his query started.
    This consistent read process fails with an ORA-01555 "snapshot too old" error, if:
    - person 'A' has committed before person 'B' could rebuild the block, and
    - there is not enough rollback space to keep person 'A's undo lying around, and
    - someone else grabbed the rollback because they needed it for an update.
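    A small JDBC sketch of the point about uncommitted redo: it watches the session's 'redo size' statistic grow before any commit has happened. The table name and connection details are placeholders, not from the original post:

    import java.sql.*;

    public class RedoDemo {
        public static void main(String[] args) throws SQLException {
            // placeholder connection details
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger")) {
                con.setAutoCommit(false);
                long before = redoSize(con);
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO t (id) VALUES (?)")) {   // t is a hypothetical table
                    for (int i = 1; i <= 100_000; i++) {
                        ps.setInt(1, i);
                        ps.addBatch();
                        if (i % 1_000 == 0) ps.executeBatch();
                    }
                    ps.executeBatch();
                }
                // Still uncommitted, yet redo has been generated; once the log
                // switches, that redo lands in the archived logs as well.
                System.out.println("Redo bytes so far: " + (redoSize(con) - before));
                con.rollback();  // rollback applies undo, and that work is logged too
            }
        }

        // current session's cumulative 'redo size' statistic
        static long redoSize(Connection con) throws SQLException {
            String q = "SELECT s.value FROM v$mystat s JOIN v$statname n " +
                       "ON s.statistic# = n.statistic# WHERE n.name = 'redo size'";
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(q)) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }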

  • Image storing algorithm at photo album sites

    Hi there;
    I am coding an image-storing website (photo albums etc.), but storing the images in a folder system is getting complex for me. What's the best way of storing images in folders?
    What I did up to now is: every member has their own main folder named with hex numbers, and I rename the uploaded images with hex numbers again.
    The result is like this (the first part is the user hex, the second the image hex):
    4324AFC3BC2390DE/45F83AECB43B2A.jpeg
    It's an unprofessional but simple working way; it's hard to predict the image hex if the image is published as a private picture.
    But if a user has 2000 photos, it may be hard for a single folder to index all the photos.
    When I look at professional photo album sites like webshots.com or flickr, the image folders are like 67/434/4543/345/4534/5443F432ACD342.jpeg, so there are many subfolders. But what's the logic and algorithm behind this? How do they name these subfolders? How do they group images into folders?
    What can I do in Java with the best performance under heavy traffic? The archiving mechanism must be fast at indexing and good for maintenance.
    Please give me ideas or refer me to some web links.
    thank you
    BuraK

    If you are using a database to store information about the images, then you can use the primary key (or an obfuscated version of it) as the name of the file.
    That way, when you move images between folders/groups you don't have to move the physical file, as its primary key stays the same. All that is needed is a DB update to change the folder/group relationship in the table.
    A side note: many file systems have severe performance problems if you store too many files in a single directory. Subdirectories can really help: use part of the primary key as a directory name and the rest as the file name.
    matfud
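    A minimal Java sketch of matfud's suggestion: derive the directory path from the key itself by splitting a hex primary key into fixed-width segments, so no single directory grows huge. The segment width and depth here are arbitrary choices:

    import java.nio.file.*;

    public class ImagePath {
        // e.g. "45F83AECB43B2A" -> 45/F8/3A/45F83AECB43B2A.jpeg
        static Path pathFor(String hexId) {
            return Paths.get(hexId.substring(0, 2),
                             hexId.substring(2, 4),
                             hexId.substring(4, 6),
                             hexId + ".jpeg");
        }

        public static void main(String[] args) {
            System.out.println(pathFor("45F83AECB43B2A"));
            // With two hex characters per level, each directory holds at most
            // 256 subdirectories, so millions of images stay spread out evenly.
        }
    }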

  • Connecting my iPod to my Mac: FireWire or USB, or both???

    My new iPod came with a USB connector. My older iPod (from 3 years ago) came with a FireWire connector, and I still have the old cable. Can I connect using either the new USB or the older FireWire connector? Which is better? Is there any difference? Why didn't Apple ask me whether I was a Mac or PC user when I ordered it, so they could send me a FireWire connector instead of the USB connector that they did send?
    USB or FireWire? Does it make any difference? FireWire must be faster, right?
    thanks for your help

    sync? what do you mean?
    Sync means to connect the iPod to the computer and load songs onto it.
    It will update/sync only with USB.
