TDMS Database Defragment

Hello all,
I am currently building a TDMS database that receives data from the network. I generate 2000 columns and 1 row of data, and my program keeps updating these values once per second, writes all of them to a TDMS file, and then defragments the file. This works fine for small files; however, once the file exceeds 2 GB an error appears and my code is forced to stop running. Does anyone have suggestions for dealing with this problem? I am curious whether there is a way to defragment the file and control its size while writing data to it.
Thanks! 
Sunming
Attachments:
error for defragging.png (68 KB)

Hello deppSu,
Thanks for your reply. The reason I thought the error was related to file size is that I have tried the same method on various file sizes, and the defragment time increases with size. Eventually it becomes impossible to finish defragmenting a file that is around 2 GB.
Also, the VI I use for defragmenting simply takes the file path and connects to the TDMS file, which should be closed correctly.
As for TDMS Flush, I have tried it, and it turns out to be very inefficient when writing data to the TDMS file.
Again, truly thank you for your advice. I am trying hard but have not figured out how to get both a small file size and efficiency.
Sunming
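
For readers hitting the same wall: the effect of TDMS defragmentation can also be reproduced by rewriting the file in one pass, since a fresh copy ends up with contiguous segments. Below is a minimal sketch of that idea in Python with the open-source nptdms package rather than LabVIEW (an assumption, purely for illustration; for a file around 2 GB you would stream the channels in chunks via TdmsFile.open instead of loading everything into memory as this sketch does).

from nptdms import TdmsFile, TdmsWriter, ChannelObject

def rewrite_defrag(src_path, dst_path):
    # Reading the whole file and writing every channel back, one
    # segment per channel, produces a defragmented copy.
    tdms = TdmsFile.read(src_path)  # loads all data into memory
    with TdmsWriter(dst_path) as writer:
        for group in tdms.groups():
            for channel in group.channels():
                writer.write_segment([
                    ChannelObject(group.name, channel.name,
                                  channel[:],  # the channel's data array
                                  properties=channel.properties)
                ])

rewrite_defrag("data.tdms", "data_defragmented.tdms")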

Similar Messages

  • TDMS database and treat its values as real time data.

    Hi,
    I'm a newbie at using LabVIEW. I want to import data from a TDMS database and treat it as real-time data. I have done the first part, which is importing the data into the program by reading the TDMS file and selecting channels and groups, but I couldn't find a way to do the second part, which is treating the data as real-time data. I tried Build Waveform, but it didn't work. Would you please help me solve this problem? I appreciate your help.

    reza_amin,
    I don't understand your word "build waveform". If you write waveform data to a tdms file, you will read waveform data directly from the tdms file.
    If you are using LabVIEW 2012 (download from http://www.ni.com/trylabview), you could find a tdms example VI in "C:\Program Files\National Instruments\LabVIEW 2012\examples\file\plat-tdms.llb\TDMS - Concurrent File Access.vi". This example VI demonstrates how to write Sine Waveform into a tdms file and read the Sine Waveform from the tdms file.
    Hope this helps and enjoy your LabVIEW journey. :-)
    Best Regards,
    Bo Xie

  • Exchange 2010 Mailbox database Defragment

    Hi
    I have an Exchange 2010 DAG with 4 nodes.
    I want to know when I should perform an offline defragment, and what the procedure is.
    Thanks
    MCP MCSA MCSE MCT MCTS CCNA

    Hello,
    Please check if these apply:
    http://blogs.technet.com/b/rmilne/archive/2013/08/23/offline-defrag-and-dag-databases_2c00_-oh-my_2100_.aspx
    http://support.microsoft.com/kb/328804
    Since this forum is for general questions and feedback related to Outlook, if you need further assistance on this issue, I'd recommend you post your question to the forum for Exchange:
    https://social.technet.microsoft.com/Forums/office/en-US/home?category=exchangeserver 
    The reason why we recommend posting appropriately is you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Steve Fan
    Forum Support
    Come back and mark the replies as answers if they help and unmark them if they provide no help.
    If you have any feedback on our support, please click here.

  • Mailbox Database Size/White Space Reduction in a DAG

    I have several large databases which I'm moving mailboxes off of, in an attempt to prevent my LUNs from running out of disk space. I also have several mailboxes of users who have left the company. My questions are as follows:
    1. When I run Remove-Mailbox -Identity "some mailbox" -Permanent $true, the mailbox is deleted from Exchange relatively instantly. Will Exchange recoup the white space that is left behind after the mailbox has been deleted, so that the database shrinks in size? Or will it just start writing over the white space rather than growing the size of the database?
    2. Pertaining to mailboxes that have been moved: if "Keep Deleted Mailboxes for" is set to 4 days on the database that holds the mailboxes I'm moving off, will the database hold onto a mailbox (even though it has successfully been moved to another database) for the duration of the "Keep Deleted Mailboxes for" setting? Or will the mailbox be hard deleted after the move completes successfully?
    3. As part of Online Maintenance, will Exchange defrag a database and decrease the white space, thereby shrinking the database size, or will I have to take it offline to do that?
    4. What is the procedure to perform an offline defrag of an Exchange database? How risky is it?

    Hello,
    To answer your questions:
    1. When an online database defrag completes a pass on the database, it will reclaim this as whitespace in the database. It will not, however, reclaim space on the disk by shrinking the .edb file. An offline defrag is required for that.
    2. A moved mailbox will not be hard deleted until the 4-day retention period has passed, unless you manually purge it using Remove-StoreMailbox.
    3. Related to the answer for number 1: Exchange will fill whitespace as mailboxes grow, but it will not reclaim disk space.
    4. In a DAG you will need to dismount the mailbox database and run eseutil /d against the active database copy. This is a relatively low-risk procedure, but it requires the database to be offline, thus interrupting mailbox access. It can also take quite a long time depending on the database size. The rate of defrag estimated by MS is 9 GB/hr, so a 90 GB database would take roughly 10 hours. Here is a reference:
    http://support.microsoft.com/kb/192185/en-us
    For these types of situations my preference (if disk space allows) is to move all mailboxes to a fresh database, then delete the database with excessive white space.

  • High/Low sampling Database

    I need some input and help with this, please.
    I'm using shared variables to log data into a TDMS database in LabVIEW.
    I'm also using cFP and cRIO to log data from different instruments (flow, pressure, temperature).
    I want to sample 5000 samples over 10 s from the cRIO (500 Hz for pressure).
    At the same time I want to sample flow from the cFP at 2 Hz.
    My problem is how I can sample and store both of these in the same database using TDMS, and at the same time have them synchronized for later analysis.
    Any ideas?

    Again, TDMS is not a database - by putting the word "Database" in the subject line you are limiting the number of people that will look at it...
    To do what you need should not be a problem assuming that you have timestamps associated with the data as it is acquired. The timestamping of the data will keep them aligned. For example, with a waveform you have a T0 value (when the acquisition started), a DeltaT (how much time there is between datapoints) and an array of data values. These values of the two waveforms will exactly define the time relationship between the two signals.
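    For illustration, here is that relationship as a short sketch (Python standing in for LabVIEW waveforms; the start time is made up, while the rates match the cRIO/cFP figures from the question):

    from datetime import datetime, timedelta

    def timestamps(t0, dt, n):
        # Absolute time of each sample: t0, t0 + dt, t0 + 2*dt, ...
        return [t0 + timedelta(seconds=i * dt) for i in range(n)]

    t0 = datetime(2015, 1, 1, 12, 0, 0)          # both acquisitions start together
    pressure_t = timestamps(t0, 1 / 500, 5000)   # 500 Hz cRIO channel, 10 s
    flow_t     = timestamps(t0, 1 / 2, 20)       # 2 Hz cFP channel, 10 s

    # Because every sample carries an absolute time, each flow reading can be
    # matched to the pressure sample nearest in time during later analysis.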
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Script for unmounting/defragmenting Exchange 2010 database

    Hi all,
    I need a script to dismount an Exchange 2010 database, defragment it, and mount it again.
    Thanks for any comment you may add...

    Tried the above commands... works like a charm, but with the following modifications:
    1. Delete trash and recoverable items from all mailboxes (run repeatedly). The command used is:
    Search-Mailbox -Identity "mailbox.name" -SearchDumpsterOnly -DeleteContent
    1A. Determine the free space in the mailbox database. The command used is:
    Get-MailboxDatabase -Status | ft Name,AvailableNewMailboxSpace,DatabaseSize
    2. Calculate the disk space needed to perform the defrag. The formula is:
    (DatabaseSize - AvailableNewMailboxSpace) * 1.1
    For example, with the 106 GB database and 95 GB of whitespace mentioned below, you need roughly (106 - 95) * 1.1 ≈ 12 GB of free space for the temporary file.
    3. Dismount the mailbox database. In EMS, go to the folder containing the database file, then use the command:
    Dismount-Database <DatabaseName>
    3A. Copy the mailbox database (the .edb and all related log files) to another location.
    4. Defrag the .edb file (eseutil creates a temp .edb and renames it to the new .edb; after successful completion it will delete the old .edb file, and if anything goes wrong it reverts back to the old .edb, as with /d):
    eseutil /d <file.edb>
    5. Mount again:
    Mount-Database <DatabaseName>
    You are advised to purge (step 1) all mailboxes before running eseutil or moving mailboxes; otherwise it will just carry all the sh!t over to the new mailbox anyway. I gave up trying to tally the total mailbox space used, the whitespace available, and the .edb file size.
    For example, I am not sure why there is such a disparity whereby I have 30 mailboxes of 2 GB each, estimated at 60 GB used, whitespace after mailbox move/eseutil at 95 GB, and an .edb size of 106 GB. This is where my older Exchange 2003 .edb files win hands down. Better budget more storage for .edb files! I am considering whether it's even worth migrating everything to Exchange 2010 at all! The .edb files are so damn space hungry even with 3-day retention and 6-day delete with 24/7 maintenance ticked. The Exchange 2010 .edb beast simply cannot be tamed.
    Personally I find online mailbox moves in the EMC faster and more reliable overnight. Furthermore, you can simply create a new database, sort all mailboxes by database, select all of them, and follow the wizard to move them all at once; the system simply queues the moves and completes them. Overnight moving is the best; everything is done the next day.

  • Pacman-defrag [obsolete - use pacman-optimize]

    After dp's discovery of a simple way to speed up pacman, which can be found here, I've come up with this little program. It should be pretty safe because it checks for a running instance of pacman before executing and locks pacman (2>/tmp/pacman.lck) during use, so it won't corrupt files. I threw in the other options because I thought they might be useful. A log will be created as needed in the /var/log directory as pacman-defrag.log.
    Move the file into /usr/bin/ , chown it root, and chmod 755.
    file and sums can also be found here
    #!/bin/bash
    ## CHANGE THESE VALUES ONLY IF THE PROGRAM COMPLAINS ##
    ##### SHOULD BE UNIQUE AND MUST NOT EXIST ALREADY! #####
    newtemp="/var/lib/new_temp/"
    oldtemp="/var/lib/old_temp/"
    ver='1.5'
    RED="\033[1;31m"
    NORMAL="\033[0;39m"
    log="/var/log/pacman-defrag.log"
    d=`date +"[%b %d %H:%M %Y]"`
    smes="Successful database defrag."
    fmes="ERROR: Unsuccessful database defrag, pacman lock detected."
    wuser="You must be root user to run this program."
    screenlog="New log file created as /var/log/pacman-defrag.log"
    notargetmes="ERROR: Unsuccessful database defrag, database directory did not exist."
    newtmpexistsmes="ERROR: Unsuccessful database defrag, temp target $newtemp already exists."
    oldtmpexistsmes="ERROR: Unsuccessful database defrag, temp target $oldtemp already exists."
    badmd5mes="ERROR: Unsuccessful database defrag, md5sums did not match. Old database restored."

    usage() {
        echo " pacman-defrag $ver"
        echo
        echo " pacman-defrag is a utility that safely defrags database"
        echo " files in /var/lib/pacman for Arch Linux's package manager,"
        echo " 'pacman'."
        echo
        echo " A log file will automatically be created and can be found in"
        echo " /var/log/ as 'pacman-defrag.log'."
        echo
        echo " usage: $0 [options]"
        echo
        echo " {-h --help}"
        echo
        echo " {-v --version}"
        echo
        echo " {-d --defrag}   Defrags database safely by checking for"
        echo "                 locks created by pacman while locking"
        echo "                 itself from it so both can't run together."
        echo "                 This should be your default option."
        echo
        echo " {-f --force}    Force the defrag ignoring pacman lock. If"
        echo "                 no lock is found, one will be created to"
        echo "                 protect pacman. BE CAREFUL WITH THIS!"
        echo
        echo " {-r --rforce}   Same as force except the lock will be"
        echo "                 removed after use."
        echo
        exit 0
    }

    version() {
        echo "version $ver"
        exit 0
    }

    # Abort unless running as root.
    idcheck() {
        [ `id -u` -ne 0 ] && {
            echo "$wuser"
            exit 1
        }
    }

    # Sanity checks: the database must exist and the temp targets must not.
    check() {
        if [ ! -d /var/lib/pacman ]; then
            echo "Error: database directory does not exist."
            echo "aborting."
            smes=$notargetmes
            logger
            exit 1
        fi
        if [ -d $newtemp ]; then
            echo "Error: $newtemp already exists. Please reset newtemp= near the top of"
            echo "this script to a unique directory that does not already exist."
            echo "aborting."
            smes=$newtmpexistsmes
            logger
            exit 1
        fi
        if [ -d $oldtemp ]; then
            echo "Error: $oldtemp already exists. Please reset oldtemp= near the top of"
            echo "this script to a unique directory that does not already exist."
            echo "aborting."
            smes=$oldtmpexistsmes
            logger
            exit 1
        fi
    }

    # Copy the database to a fresh directory (the copy is written out
    # contiguously), swap it in, and verify it against an md5 checksum of
    # the original before deleting anything.
    movecopy() {
        echo
        cd /var/lib
        tar -cf old.tar pacman/
        md5sum old.tar | sort > old.sums
        cp -rp pacman/ $newtemp
        mv -f pacman/ $oldtemp
        mv -f $newtemp pacman/
        echo -e "checking integrity... \c"
        tar -cf new.tar pacman/
        md5sum new.tar | sort > new.sums
        olddb=`cat old.sums | cut -d' ' -f 1`
        newdb=`cat new.sums | cut -d' ' -f 1`
        echo $olddb > olddb.sums
        echo $newdb > newdb.sums
        diff olddb.sums newdb.sums
        if [ $? -ne 0 ]; then
            echo "FAILED."
            echo -e "restoring old databases... \c"
            rm -r pacman/
            mv -f $oldtemp pacman/
            smes=$badmd5mes
        else
            echo "OK."
            ##### UNCOMMENT TO SEE SUMS ####
            #echo $olddb
            #echo $newdb
            echo -e "completing defrag... \c"
            rm -r $oldtemp
        fi
        rm -f old.tar
        rm -f new.tar
        rm -f old.sums
        rm -f new.sums
        rm -f olddb.sums
        rm -f newdb.sums
    }

    # Append the result message to the log, creating the log if needed.
    logger() {
        if [ -f $log ]; then
            echo -e "Logging event... \c"
            echo -n "$d " >> $log
            echo "$smes" >> $log
            echo "done."
        else
            echo -n "$d " >> $log
            echo "$smes" >> $log
            echo "$screenlog"
        fi
        exit 0
    }

    defrag() {
        idcheck
        check
        if [ -f "/tmp/pacman.lck" ]; then
            echo -e "${RED}Pacman lock detected!${NORMAL}"
            echo "If you are sure pacman is not being used, run this program again using -r."
            echo "aborting."
            smes=$fmes
        else
            2>/tmp/pacman.lck   # creates the lock file so pacman can't run
            movecopy
            rm -f /tmp/pacman.lck
            echo "done."
        fi
        logger
    }

    force() {
        idcheck
        check
        if [ -f "/tmp/pacman.lck" ]; then
            movecopy
        else
            2>/tmp/pacman.lck
            movecopy
            rm -f /tmp/pacman.lck
        fi
        logger
    }

    rforce() {
        idcheck
        check
        if [ -f "/tmp/pacman.lck" ]; then
            movecopy
            rm -f /tmp/pacman.lck
        else
            2>/tmp/pacman.lck
            movecopy
            rm -f /tmp/pacman.lck
        fi
        echo "done."
        logger
    }

    case $1 in
        -h | --help) usage ;;
        -d | --defrag) defrag ;;
        -v | --version) version ;;
        -f | --force) force ;;
        -r | --rforce) rforce ;;
        *) usage ;;
    esac
    # end of file

    IceRAM wrote: The script has probably cached all the pacman files into memory; that's why pacman is so responsive. Try searching with pacman after a reboot and compare times then.
    After a reboot it's still improved.
    To start with, your pacman db is going to be around a similar spot on the hard drive, but over time, as it changes and packages come and go, it gradually gets spread across the hard drive, with later entries ending up as far away as the other end of the drive.
    This causes the hard drive's head to have to whip back and forth between platters and all around the place.
    The idea of this is that it makes a copy which is generally all together, or close together, so the hard drive's head doesn't have to go nuts everywhere, and it does make a difference.
    iphitus

  • UCCX 5 SQL Dbase Query Issues - Slow Returns

    For several months it seemed to me that our UCCX 5 system was beginning to have trouble. At first it appeared as a problem with our wallboard system being unable to display the real-time data properly. I had deduced that queries to the db_cra database were taking 4 - 5 minutes to return data.
    At first I called on the vendor for our wallboard, but they were unable to help stating that it was a UCCX database issue - and I agreed. I then went to Cisco TAC and fed them all my information and asked them for help. They also were unable to help figure out what was happening, but they could see that the SQL queries that used to take 2 seconds were now taking several minutes. Since the wallboard system queried the database every 10 seconds, it was over-taxing the database.
    This began to cause little anomalies in things like call-distribution, dropped calls, slow to no Historical reports returns, and this ultimately began to cause failovers to our backup node. I could not go a day without a failover and I had to reboot the servers almost every night.
    I felt there was no solution and was afraid we needed to replace our system.
    After talking to all the engineers around the world, my solution came from our very own SQL administrator. Within 5 minutes he was able to resolve all our issues.
    The root of our problem was that the indexing of the db_cra database was extremely fragmented.
    He wrote a SQL script that he had me run against the database, and he told me that this would correct any indexing issues.
    It worked like a charm.
    The system stopped behaving poorly immediately. This resolved many of our problems that I had been living with for a long time. It made the Admin page work so much faster, call distribution issues halted, the wallboard is running faster than ever with real-time data, and the list goes on. This resolved so many smaller issues that I cannot even list them all.
    I would advise anyone using a SQL database, whether it be a Cisco product or not, to always keep their database 'defragmented'.
    My SQL admin also setup the script to run automatically, on a schedule to keep our database running like a Ferrari. It has been sweet ever since.
    KEEP YOUR INDEXES CLEAN AND YOU WILL REAP HAPPINESS!

    I was equally shocked that there was no escalation, as were my boss and my Cisco rep. They let me go with no resolution and hung up the phone, leaving me with instructions to have someone else take a look at the database. It was very unexpected, especially considering how close they were to the point when we were troubleshooting.
    The TAC SR # 611904619 - IPCC Database Not Returning Query Data.
    With this resolution, it also raised the issue of Informix in my mind. We are using CUCM 6.1.2 and are concerned about the same issue happening there; we would like to determine whether it can, and whether there is a re-indexing tool available for Informix.

  • WSUS Hangs on "Unused updates and update revisions"

    In our environment we have not cleaned up WSUS updates in a while, probably about a year. I have been tasked with cleaning it up and optimizing it. When I attempt to run the WSUS Server Cleanup Wizard, it hangs on "Unused updates and update revisions" and doesn't do anything. The wizard then disappears and I am taken back to the WSUS console, where I see the following:
    Error: Database Error
    When I click copy error to clipboard, this is what is copied:
    The WSUS administration console was unable to connect to the WSUS Server Database.
    Verify that SQL server is running on the WSUS Server. If the problem persists, try restarting SQL.
    System.Data.SqlClient.SqlException -- Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
    Source
    .Net SqlClient Data Provider
    Stack Trace:
    at System.Windows.Forms.Control.MarshaledInvoke(Control caller, Delegate method, Object[] args, Boolean synchronous)
    at System.Windows.Forms.Control.Invoke(Delegate method, Object[] args)
    at Microsoft.UpdateServices.UI.SnapIn.Wizards.ServerCleanup.ServerCleanupWizard.OnCleanupComplete(Object sender, PerformCleanupCompletedEventArgs e)
    In my research, I have found several solutions, and I have tried a few of my own. Below I will list the ones I have tried:
    1. Run each item individually, i.e. run superseded updates, then run expired updates after that finishes, then unneeded update files after that finishes, etc. This did not work.
    2. Stop the WSUS IIS service and run "wsusutil.exe removeunneededupdates". This also did not work.
    The solutions I have found, but am hesitant to try (because we are in a live production environment), are the following:
    1. Re-index the WSUS database according to the following link:
    https://technet.microsoft.com/en-us/library/dd939795(v=ws.10).aspx
    2. Delete the files in the WSUS folder and run "wsusutil.exe reset", from the following link:
    http://windowsitpro.com/windows/quick-tip-resetting-wsus-server
    I would like to know what impact, if any, either of these solutions will have on our environment, and which one is the preferred method. I feel I should use the first one, and if I still don't have any luck then try the second. My only concern with the second method is that it might break something (not sure what, except updates).
    A little about our environment: 8,000 computers; WSUS and SCCM are on the same Server 2008 R2 server; we only have the one WSUS and SCCM server, so nothing downstream as far as WSUS is concerned.
    Thanks everyone!

    UPDATE:
    I have started going through and declining updates, and then going through SCCM and removing any expired updates from the SCCM side as well. I am now down to 3822 unapproved updates in WSUS, with a total of 13304 updates.
    At this point I am STILL UNABLE to run the cleanup wizard to clean up any of these. It still hangs when it runs, and only on "Unused updates and update revisions". I have even tried running PowerShell scripts and command-line utilities to clean WSUS up, and it still just hangs. I am pretty much at a loss, so any other advice would be appreciated.
    One more question, should it come to that: what impact does resetting WSUS have on a live environment with 7000-plus computers? Should I have to resort to it, is this something that is going to cause any major issues?
    We do plan on preventing this in the future by running the database defrag once a month and the WSUS Server Cleanup Wizard once per quarter, but getting there sure is proving to be quite the challenge.
    Thanks again!

  • Running Activate Database Views activity in parallel in TDMS 4.0

    Hello Colleagues,
    We are running TDMS 4.0 and, as we need to hasten the process, we would like to ask whether the activity ACTIVATION OF DATABASE VIEWS can run in parallel with the activity DATA SELECTION START FOR TDMS HEADER TABLE. In the guided procedure, ACTIVATION OF DATABASE VIEWS is dependent on the completion of DATA DELETION START IN RECEIVER SYSTEMS (which is already completed, as per the screenshot).
    According to other KBAs, this step activates database views that may have been deactivated after the drop of tables in the receiver system, which is why its prerequisite is completion of DATA DELETION START IN RECEIVER SYSTEMS.
    Thus, we would like to know whether ACTIVATION OF DATABASE VIEWS can run in parallel with DATA SELECTION START FOR TDMS HEADER TABLE. Kindly see the screenshot.
    Many thanks,
    Meinard

    Hello Meinard,
    Yes, you can start the activity "Activation of database views" in parallel with the activity "Data selection start for TDMS header tables". Once activation of database views is completed, you can start the "Data Transfer" activity.
    Thanks
    Anita

  • Administer defrag remotely for Oracle lite client database?

    Hi all,
    My client has installed Oracle Lite version 10.3.0.3. The Oracle Lite database file (ODB) tends to grow too large. Oracle recommends resizing the Oracle Lite database by running the Defrag.exe tool, but it has to be installed manually on each handheld device. Does anyone know how to administer Defrag.exe remotely from the Oracle Lite server, or can it be done another way?
    Thanks.

    user6998712 wrote:
    Hi all,
    My client has installed Oracle Lite version 10.3.0.3. The Oracle Lite database file (ODB) tends to grow too large. Oracle recommends resizing the Oracle Lite database by running the Defrag.exe tool.
    Please post the URL where this is documented.

  • TDMS Defragment Through Windows Command Prompt

    Dear All,
    We need to defragment a TDMS file with TDMS Defragment.VI via the Windows command prompt.
    The workflow will be as follows:
    1. An application will be created to defragment using TDMS Defragment.VI.
    2. When the user stops the main application (App 1), which records data to TDMS for a long time, App 1 will call the defragmenting application (App 2) via cmd. App 2 should get the file path, start defragmentation, and pop up a message when defragmentation is completed. We don't want App 1 to do anything apart from triggering the TDMS defragmentation.
    The reason is that we don't want App 1 to wait a long time for the defragmentation to finish, since we need App 1 for other testing.
    So we decided to do the defragmentation some other way and display the completion message to the user.
    Please provide suggestions.

    I'd also recommend doing this in one application. If you truly want to do it from a command line you can. LabVIEW-built EXEs can support command-line switches if you enable it in the build specifications. If you do this you'll probably want to support multiple instances, and code it so that it will exit once the conversion is done. But again, you can get more information from the result of the defrag (like errors) if you do it in the same application.
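    As an illustration of the fire-and-forget launch described above, here is a minimal sketch (Python as a stand-in for the LabVIEW caller; "Defrag.exe" and its argument convention are hypothetical stand-ins for the built App 2):

    import subprocess

    def launch_defrag(tdms_path):
        # Popen returns immediately, so App 1 keeps running while App 2
        # defragments the file and shows its own completion message.
        subprocess.Popen(["Defrag.exe", tdms_path])

    launch_defrag(r"C:\data\run_001.tdms")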
    Taki1999 wrote:
    EDIT: Asynchronously is difficult to spell
    Totally agree and some spell checks claim it isn't a word.  Same with programmatically.
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

  • Error 6 sometimes with TDMS Defragment VI (LV 8.2) (WinXP)

    I sometimes get error 6 when I run the TDMS Defragment VI. This VI is run in my program right after I close the file after writing it (to a network drive). Could that be the problem? Does the file need time to close before the defrag is started? If so, is there any way to know when the file is ready?
    George

    Hi George,
    I could not reproduce this error with the TDMS file saved on either a local drive or a network drive. I suggest testing the VI with the TDMS file on the local drive. You can also place a flat sequence structure with a 100 ms time delay between the TDMS Close and TDMS Defragment VIs.
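    For what it's worth, the same "give the file a moment, then try again" idea can be written as a small retry helper; a sketch in Python (a stand-in for the LabVIEW structure, with defragment() as a hypothetical defrag entry point):

    import time

    def retry(action, attempts=5, delay_s=0.1):
        # Try the action a few times, sleeping between attempts so a
        # freshly closed file has time to become available again.
        for _ in range(attempts):
            try:
                return action()
            except OSError:
                time.sleep(delay_s)
        raise OSError("file still busy after retries")

    # usage (defragment is hypothetical): retry(lambda: defragment("data.tdms"))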
    Tunde

  • Perf. tip (Windows): defragment database files only

    Hi,
    since the database files of LR are large and heavily accessed, it is a good idea to defragment those files frequently. With the defragmenter built into XP and Vista, however, you can only defragment a whole disk, and this can take a considerable amount of time. Below is a link to a (free) command-line tool that can defrag a single file and does an excellent job of it!
    http://www.microsoft.com/technet/sysinternals/FileAndDisk/Contig.mspx
    One option is to put a link to contig.exe in the 'Send to' folder and then manually defrag the LR database files ('Lightroom Library.aglib' and 'Lightroom Previews.lrdata\thumbnail-cache.db'), or set up a task in the Windows Task Scheduler to do it automatically.
    After importing pics, my 500 MB aglib file was spread all over the disk in more than 800 fragments, and contig.exe reduced it to 1 fragment after ~30 s. The speed improvement after the defrag was quite noticeable.
    Another good idea is to switch off Windows file indexing when using LR, especially during a large import. You should also set your anti-virus software to ignore the LR library files. Doing all this made quite a difference for me! Hope it helps!
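    For reference, the scheduled-task variant can be scripted; a minimal sketch (Python as a stand-in, with illustrative paths, and assuming contig.exe is on the PATH):

    import subprocess

    # Hypothetical catalog locations; adjust to where your Lightroom
    # library actually lives.
    LR_FILES = [
        r"C:\Users\me\Pictures\Lightroom\Lightroom Library.aglib",
        r"C:\Users\me\Pictures\Lightroom\Lightroom Previews.lrdata\thumbnail-cache.db",
    ]

    for path in LR_FILES:
        # Contig defragments just the named file, not the whole disk.
        subprocess.run(["contig.exe", path], check=True)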

    Ok, I think I found my problem.
    oracle-xe configuration runs a script called XE.sh to create the database in oradata/XE. This script was failing because init.ora and initXETemp.ora have two parameters which are not being set.
    I changed these:
    sga_target=%sga_target%
    pga_aggregate_target=%pga_aggregate_target%
    to:
    sga_target=209715200
    pga_aggregate_target=69206016
    in both .ora files, erased /etc/default/oracle-xe, and reconfigured.
    Now the database is created in oradata/XE.
    If anyone knows why these parameters are not being populated, or if these numbers are not correct, please let me know.
    Thanks
    -- Ramiro

  • Exchange 2007 Database mount error after server crash during defrag.

    Help!
    Halfway through copying the temporary database .edb file, my Exchange server crashed.
    I now get the following errors when trying to mount the database.
    Error:
    Exchange is unable to mount the database that you specified. Specified database: Mail Store; Error code: MapiExceptionJetErrorPageNotInitialized: Unable to mount database. (hr=0x80004005, ec=-1019)
    I still have the temp database file from the finished defrag and a copy of the original database from before it was done, but I need some urgent help to get my system back before tomorrow!
    I do not have circular logging enabled.
    This topic first appeared in the Spiceworks Community

    Most likely, but I have copies of the original and the temp still.
    Is there a way these can be used? I was hoping not for an MS Support answer... =/
