ZPool performance during resilver

Hi everyone. Hoping someone can provide some guidance here.
We are trying to develop some low-end NAS devices using Solaris 11.1 to house virtual machines. Between the RAID configuration and the SSDs we have been able to obtain very solid performance under max load. The problem is when a drive fails.
Disk latency, more precisely read latency, goes extremely high. For example, on one recently built unit that isn't housing many machines yet, starting a resilver makes read latency jump from 25ms to 235ms. A bit of a problem for VMs.
1.37TB used on a 14TB pool, almost 200GB to resilver.
Hardware RAID controllers, in my experience, are able to rebuild a drive without a noticeable performance impact. Different situation, I understand.
I am trying to locate a tunable or some other suggestion to help resolve this issue. I suppose the same can be said for scrubs. We currently do not perform scrubs because it brings the system to its knees.
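For reference, the only knobs I've turned up so far are the resilver/scrub throttles people mention for Solaris 10/illumos; I can't confirm they are still honored on 11.1, so treat these as assumptions rather than a recommendation:
# Candidate /etc/system settings (names from Solaris 10/illumos ZFS;
# verify against your kernel before relying on them)
set zfs:zfs_resilver_delay = 4            # ticks to delay each resilver I/O when the pool is busy
set zfs:zfs_scrub_delay = 8               # same throttle for scrub I/O
set zfs:zfs_resilver_min_time_ms = 1000   # time spent resilvering per txg
# Tweak a live kernel with mdb instead of rebooting:
echo "zfs_resilver_delay/W0t4" | mdb -kw
# A scrub, at least, can be cancelled outright once started (pool name is a placeholder):
zpool scrub -s mypool
If anyone knows of a supported equivalent on 11.1, that would be great.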

Thanks again for your feedback.
If resilvering cannot be stopped then Oracle may want to clarify its documentation: http://docs.oracle.com/cd/E19082-01/817-2271/gbcus/index.html
"Resilvering is interruptible and safe." I guess this is only in terms of an accidental interruption like a power outage.
I also ran iostat -En and there are a lot of errors. A small portion is below.
root@pop-tac-san02:~# iostat -En
c9d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: WDC WD800AAJS-0 Revision: Serial No: WD-WCAV2N5 Size: 80.03GB <80025845760 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
c10d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: WDC WD800AAJS-0 Revision: Serial No: WD-WCAV2N4 Size: 80.03GB <80025845760 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
c7t21d0 Soft Errors: 0 Hard Errors: 11 Transport Errors: 23
Vendor: ATA Product: WDC WD2001FASS-0 Revision: 0101 Serial No: WD-WMAUR0278223
Size: 2000.40GB <2000398934016 bytes>
Media Error: 9 Device Not Ready: 0 No Device: 2 Recoverable: 0
Illegal Request: 12 Predictive Failure Analysis: 0
c7t22d0 Soft Errors: 0 Hard Errors: 5 Transport Errors: 19
Vendor: ATA Product: WDC WD2001FASS-0 Revision: 0101 Serial No: WD-WMAUR0371013
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 5 Recoverable: 0
Illegal Request: 10 Predictive Failure Analysis: 0
c7t23d0 Soft Errors: 0 Hard Errors: 7 Transport Errors: 28
Vendor: ATA Product: WDC WD2001FASS-0 Revision: 0101 Serial No: WD-WMAY00096167
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 7 Recoverable: 0
Illegal Request: 9 Predictive Failure Analysis: 0
c7t24d0 Soft Errors: 0 Hard Errors: 4 Transport Errors: 16
Vendor: ATA Product: WDC WD2001FASS-0 Revision: 0101 Serial No: WD-WMAY00129943
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 4 Recoverable: 0
Illegal Request: 10 Predictive Failure Analysis: 0
I'm no expert but I'm guessing this is causing my performance problems during resilver.
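For anyone else chasing the same thing, the counters can be cross-checked against what the fault manager and the pool itself report (standard Solaris commands):
# Faults FMA has actually diagnosed
fmadm faulty
# Raw error telemetry logged by FMA (recent events at the end)
fmdump -e
# ZFS's own per-device read/write/checksum error counts
zpool status -v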
Edited by: 991704 on May 22, 2013 10:34 AM

Similar Messages

  • How do you log SAP vendor activity performed during an OSS Connection?

    Hello Everyone,
    What is the best and cost effective way to log SAP vendor activity while they are connected to perform OSS related work?
    The environment under review provides an OSS SAP support ID and password to the vendor to facilitate the authentication, however, no security audit logging is performed. The SAP team is raising performance issues as the reason why the logging is not turned on.
    Is there a way to restrict the security logging only to the OSS support ID and also determine the activity performed during the session? Basically, how can the internal team know that an unauthorized activity was performed during the connection and / or someone made several failed attempts to authenticate through the OSS connection?
    Thank you.

    It also records the GUI event, regardless of whether the activity was successfully started, let alone completed.
    In my books it is "fluffy" security, which is inherently flawed for the purpose you are using it for. The main reason for the records is response-time measurement.
    It only works because authorization concepts are sometimes inherently flawed in their implementation as well... so people believe what they see...
    But for forensics it is useful IF you are fast enough...
    Cheers,
    Julius

  • Is there a way to improve computer performance during TM backup?

    I have a MacBook Pro 2.4GHz Intel Core 2 Duo with 2GB of RAM running 10.5.7 and every time Time Machine starts a backup (about once an hour) my computer becomes unresponsive. I can't type, copy, paste, switch applications without a huge hang up. For example, it took me a total of 7 minutes to get to this thread because Time Machine started a backup and I was stuck in the Finder trying to switch to Safari. Once Safari finally showed up, I got a beach ball.
    When TM isn't running, my computer is restored to a responsive state.
    I love using Time Machine, because it's a great way to backup everything. Short of turning it off, is there anything else I can do to prevent my computer from going in to lockdown once an hour?
    I'm backing up wirelessly to a MyBook USB drive btw.

    Just an update to those who may stumble across this thread.
    This issue has not been solved. In fact, I could say it's gotten much much worse. It seems that I can't type, copy/paste or perform even the most basic of tasks while a backup is in progress.
    I've downloaded the Time Machine Backup Widget, and it shows a few oddities that may or may not be significant. On one particular day the log shows a request for 1.3GB and then, for the next backup (1 hour later), a request for another 2GB. Each of the hourly backups that day seemed to request a similar size. What's unusual is that I wasn't really working on anything of significant size at all that day. I'm a web developer, so I may modify a 1MB file throughout the day, for several hours, but I'm nowhere near generating modifications that would equate to 8GB a day. My first thought was that Time Machine might be swapping out a weekly/monthly backup at that time, but the same requests kept coming for the following three days. Does it take Time Machine that long to perform a weekly backup?
    At Time Machine's current rate, my hard drive is filling up pretty fast, to the tune of 8GB a day. The logs also seem to indicate that Time Machine is only copying 2.1MB a day. Again, that falls in line with what I would expect from my activity, but the hard drive seems to be giving up the "requested" amount rather than the (smaller) copied amount indicated in the log files.
    Backup frequency also seems to be higher than normal on some days. Today, for example, Time Machine has been backing up about once every 20 minutes or so. Right now I'm in a backup; at various times my typing has been affected, and according to the backup status it has been backing up 2.4MB of 2.5MB for the past 22 minutes. It just barely finishes and then it starts right back up again.
    I've kept Activity Monitor running for the past couple of weeks and, as suspected, my CPU reaches maximum capacity and my disk activity goes through the roof during a backup. My external wireless mouse and anything else connected to my computer seem to become unresponsive. The trackpad and keyboard go next. Then, typically, I see the beach ball and I know I'm up for an unscheduled computer break. This is most annoying when I'm trying to type an email and a backup kicks in just before I have a chance to click send. One time it took 10 minutes for my click to be recognized so the email could be queued to send. It took another 3 full minutes for the email to actually go out.
    For me, backups are essential. I have resisted turning this feature off, simply because I spent so much money to get it all to work (upgraded OS, purchased a new router and a new backup HD).
    I hope Apple releases more control over the backup features and the backup frequency. I'd love to see an option to have backups happen only when CPU cycles are at idle for a few minutes, which would indicate that the user isn't on the computer. Or perhaps give users the ability to set a less aggressive backup schedule, perhaps every four hours or so, rather than the aggressive hourly backups I'm stuck with now.
    Time Machine's current status: "finishing backup." That has been its status for at least 10 minutes, which, combined with the outlandish 45 minutes it took to copy 2.5MB of data, means I have just 5 minutes before my next backup is scheduled to start. There has to be a better way.
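    The one workaround I've seen floated (untested by me, so treat it as an assumption) is stretching the StartInterval that launchd uses to kick off backupd; the default is 3600 seconds:
    # Stretch the backup interval to 4 hours (value is in seconds)
    sudo defaults write /System/Library/LaunchDaemons/com.apple.backupd-auto StartInterval -int 14400
    # Reload the job so the new interval takes effect
    sudo launchctl unload /System/Library/LaunchDaemons/com.apple.backupd-auto.plist
    sudo launchctl load /System/Library/LaunchDaemons/com.apple.backupd-auto.plist
    Be aware that an OS update can quietly put the default back.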

  • Steady lack of performance during ownership.

    Since my last post (http://discussions.apple.com/thread.jspa?threadID=992709) the performance of my Macbook Pro has gotten worse.
    Besides the problem mentioned before (generally poor graphical performance in computer games and other graphics-dependent software), the performance of my computer is really not matching its hardware specifications, to the best of my experience.
    Furthermore, when doing small tasks, like typing in a text field (in an IM window or even this form field right here), after just a few characters the system slows down and lets me type letters a lot faster than they actually appear on screen.
    Even more, running games such as World of Warcraft together with communication software like Ventrilo constantly screws up the system audio of my machine. Microphone input and output are distorted into ear-piercing noise, the system hangs for long periods of time, and it sometimes even crashes in kernel panics.
    These kernel panics, though, seem to be caused by the game software and not the hardware, since I have heard from many people suffering from these crashes. But the rest of the issues described here are not something that people with similar systems have experienced, which has led me to believe that something in my hardware has been damaged.
    To make a long story short - going from being able to multitask tons of heavy software at a top-notch performance, my mac is reduced to a shadow of what it once was, and I have treated it with great care.
    A supporter from the Mac-division of Blizzard entertainment suggested after a long talk that my machine could have sustained damage from overheating at some point, causing this trouble, and I should have my computer examined by Apple.
    Since there is a fee involved in having Apple look at my computer should the error not be hardware, I would like to know if someone could suggest some settings to check in System Profiler that could verify or rule out that something is wrong with the hardware, and what I should do if Apple is to check it out.
    PS: I have tried almost every "tweak" that is suggested to increase performance. Bottom line is that I have not altered much of my system between the amazing performance and the horrible performance.
    Thanks in advance, Kspr

    As an update, I tried installing Windows XP via Boot Camp to see if there would be any change in performance.
    About 1 minute into partitioning the drive, my machine had a complete kernel panic crash.
    Having lost 15GB into thin air, I recovered it by booting into disk mode and doing a "repair disk".
    That recovered the lost space, so I tried again. Kernel panic once again. Repeated it a third time just to be sure; kernel panic again.
    I'm positive something's very wrong. Any thoughts on this?

  • Sluggish performance during disk operations

    I've got a PC with a SATA hard disk, and Arch is installed on a partition which uses ext3. During any longer disk operation the performance degrades to near unusability. The movement of the mouse pointer is very choppy, switching between windows or desktops takes several seconds, and so on... It would of course be understandable if I were trying to access the disk with some program while the operation is active, but this happens even if I'm trying to access something that is loaded into RAM. And all the time most of the RAM remains free.
    This doesn't happen while doing similar things in Windows, so I guess I must have just configured my kernel badly or something. Any advice on where to start solving this?

    Thanks for your reply.
    I've enabled the correct chipset while customizing the kernel (Intel ICH6, without which Arch won't even boot, so it should be correct). I've also enabled DMA, but from certain sources I've understood that DMA only applies to IDE devices and doesn't affect SATA performance.
    I did some poking on some other forums and found out that this problem has been pestering other people too, but no solution has been found (at least I don't know of any).
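    One place worth poking at on a 2.6 kernel is the block-layer I/O scheduler (a sketch; it assumes the alternative schedulers were compiled in and the disk is sda):
    # See which I/O scheduler is active (the one in brackets), then switch
    # elevators; the change is live and reversible (run as root)
    cat /sys/block/sda/queue/scheduler
    echo deadline > /sys/block/sda/queue/scheduler
    Desktop interactivity under heavy I/O often differs a lot between cfq and deadline, so it's a cheap experiment.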

  • Question about Workstation Performance during Scanning

    I am looking for information about workstation performance during scanning for inventory. My management wants information on the impact before I can implement the service in our environment. We do have a very wide array of computers, anything from Win95 P200 to Win2k/XP P4 3GHz. Can anyone point me in the right direction?

    Inventory is minimal. The first scan takes the most performance; on a P200 we are talking about 5 minutes with little or no performance hit. After that, only changes are sent to the server.
    I wasn't able to find any doc that talked about performance, but you can read about inventory in the ZFD documentation.
    Hope that helps.
    Jared L Jennings, CNE
    Novell Support Forums SysOp

  • Aperture 1.1 performance during longer work time

    Hi,
    I have Aperture 1.1 running on a G5 2.3GHz dual core with a 7800GT and 8GB RAM and on my MacBook Pro 2.16GHz with 2GB RAM.
    Aperture runs fine and the performance is good on both machines after starting a work session.
    But after a few minutes and several pictures, Aperture slows down and sometimes the spinning ball shows up.
    I don't know why this performance change happens during work.
    Any idea?
    I also noticed that the 1.1 version has higher CPU usage than the versions before, but I don't know if it still uses the GPU as much as before.
    JO

    Same here, Aperture 1.1 does seem to be more of a CPU hog now than it was before -- incl. a nasty new behavior where it cripples other programs from running or rendering to screen while it is running and "active", even when it is in the background. I also see what others have reported where after 15-30 minutes the Aperture (1.1) perf gets progressively more dismal until the program becomes unusably slow, and/or freezes at the SBBOD.
    I'm not that convinced memory is the root cause of the perf. problems, as it doesn't appear to be notably worse than 1.0.1 in its usage, nor does its usage radically change during the perf. problems versus prior to them.
    Given both behaviors, I wonder if Aperture is internally competing against its own threads (as well as those of others, though others appear to have lower-priority access) for some critical system lock that each thread is holding too long, or something along that line of pathology. Just speculating, but it seem a closer fit with what's happening inside and outside Aperture while running (impacting other programs' perf as well), versus just rampant memory leakage.
    In any event, after about 15-30 mins, Aperture's perf frequently degrades to where it stops being usable (along with the rest of the system while it is running). Other programs running simultaneously with Aperture will start hitting serious perf issues well before Aperture freezes (indeed often before Aperture shows any signs of problems). 1.1 solved many of the big RAW problems, but seems to have introduced some nastily fatal flaws of its own.
    Power Macintosh G5 (dual 1.8G)   Mac OS X (10.4.6)   3GB Ram, ATI 9800Pro

  • S_ALR_87013531 how to increase its performance during execution

    Dear sir
    How can I increase the performance of report S_ALR_87013531 during execution? It takes six hours, or gives a run-time-exceeded error.
    Regards
    kunal

    Hi,
    I don't think it should take 6 hrs to execute this report.
    Please check with your BASIS consultant.
    Muzamil

  • NVIdia Powermizer on Laptops - Slow Performance during gaming

    Hope this is the place for this post...
    I'm currently running nvidia 180.29 and I have the following set in my xorg.conf file:
    Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x3"
    The problem I see occurring is that during gaming the performance level will drop from what I have it set to above (Level 2) to a lower performance level. This, of course, causes a HUGE drop in frames while gaming. I've tried EVERYTHING I researched but nothing seems to make the setting stick.
    I have tried:
    1.  Adding options to /etc/modprobe.conf (setting nvidia options I found while googling around)
    2.  The above options added to my device sections of my xorg.conf file
    3.  Running this script that some said would keep it sticky:
    #!/bin/sh
    # Poll the AC adapter state every 25 seconds; while on AC power, query
    # nvidia-settings to keep the driver from dropping to a lower PowerMizer level.
    while true; do
        powerstate=`awk '{print $2}' /proc/acpi/ac_adapter/AC/state`
        if [ "$powerstate" = "on-line" ]; then
            nvidia-settings -q all > /dev/null
        fi
        sleep 25
    done
    All attempts have failed and my performance is HORRIBLE as long as the game is running. When the game quits, the performance returns.
    Any suggestions, information, links..etc would be greatly appreciated.
    Thanks in advance...
    EDIT: I see a lot of people complaining about 180.29 and PowerMizer... does anyone know if it's broken? If it is, can someone post a quick link to how to downgrade?
    Last edited by johnnymac (2009-03-15 02:39:56)

    You can try installing an older driver from http://www.nvidia.com/object/unix.html - find the driver for your PC, select the archive, and run the script... Should work.
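    A sketch of the usual downgrade dance, in case it helps (the version number is just an example; pick whichever archived release you want):
    # Stop X first, from a text console (path is Arch's initscripts; adjust per distro)
    /etc/rc.d/gdm stop
    # Run the older installer as root; it replaces the newer driver in place
    sh NVIDIA-Linux-x86-173.14.18-pkg1.run
    Afterwards restart X and check the driver version in nvidia-settings.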

  • Slow Query Performance During Process Of SSAS Tabular

    As part of my SSAS Tabular process script task in an SSIS package, I read all new rows from the database and insert them into the Tabular database using Process Add. The process works fine, but for the duration of the Process Add, user queries to my Tabular model become very slow.
    Is there a way to prevent the impact of Process Add on user queries? Users need near real time queries.
    I am using SQL Server 2012 SP2.
    Thanks

    Hi AL.M,
    According to your description, when you query the Tabular model during a Process Add, the performance is slow. Right?
    In Analysis Services, it's not supported to have an MDX/DAX query ignore the Process Add of the Tabular model; it will always query the updated model. In this scenario, if you really need good query performance, I suggest you create two Tabular databases.
    One is for end users to get data; the other one is used for updates (full process). After the process is done, let the users query the updated database.
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support

  • Siebel Prod App poor performance during EIM tables data load

    Hi Experts,
    I have a situation: Siebel Production application performance becomes poor when I start loading data into the Siebel EIM tables during business hours. I'm not executing any EIM jobs during business hours, so why is the database becoming slow and the application getting affected?
    I understand that the Siebel application fetches data from the Siebel base tables. In that case, why does the application get very slow when only the EIM tables are being loaded?
    FYI - Siebel production Application server has good hardware configuration.
    Thanks,
    Shaik

    You have to talk with your DBA.
    Let's say your DB is running from one hard disk (HD).
    I guess you can imagine things will slow down when multiple processes start accessing the DB which is running from one HD.
    When you start loading the EIM tables, your DB will use a lot of time for writing and has less time to serve the data to the Siebel application server.
    The hardware for the Siebel application servers is not really relevant here.
    See if you can put the EIM tables on their own partition/hard disks.

  • Zpool performance drop when more the 90% utilisation of a vdev

    I notice that when a vdev of a pool is more than 90% used, the performance of that pool drops and kernel utilisation increases sharply.
    We are using Solaris 10 u7. I have the following vdevs in my pool:
    cds3-devl-01-z1 1.09T 876G 76 100 3.94M 3.83M
    c3t600A0B80002994960000086649A5024Fd0 242G 5.57G 10 7 486K 167K
    c3t600A0B80002994960000086749A67E47d0 242G 5.76G 11 8 510K 189K
    c3t0020C24001073764d0 1.29M 63.5G 0 41 0 1.88M
    c3t600A0B80002994960000087E49FEC2CDd0 242G 6.39G 12 11 622K 267K
    c3t600A0B800029979E000009D14A2627BCd0 140G 108G 17 13 989K 585K
    c3t600A0B8000299496000008814A26286Dd0 130G 118G 15 12 902K 522K
    c3t600A0B8000299496000008844A2D0897d0 116G 632G 15 10 928K 472K
    I have some vdevs that are low on free space, but I have some that have plenty of it.
    ZFS keeps writing to the ones where there is little space, which generates a lot of kernel activity while it's looking for free space.
    Is there a way to tell ZFS not to write to those vdevs anymore so we can use the other vdevs?
    The only way that I see to resolve my problem is to do a backup/restore, which should spread out the space used.
    A ZFS reorg or defrag would be nice !!
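    In the meantime, I assume a snapshot plus send/receive within the pool would do the same rewrite a backup/restore would; a sketch (the dataset name is made up, and it needs enough free space for a second copy):
    # Duplicate the dataset inside the pool; the rewritten blocks get
    # allocated across all vdevs, including the emptier ones
    zfs snapshot cds3-devl-01-z1/data@rebalance
    zfs send cds3-devl-01-z1/data@rebalance | zfs receive cds3-devl-01-z1/data.new
    # After verifying the copy, swap it into place
    zfs destroy -r cds3-devl-01-z1/data
    zfs rename cds3-devl-01-z1/data.new cds3-devl-01-z1/data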

    ZFS tries to keep the utilisation even between all vdevs of a single pool. Check your pool with zpool iostat:
    # zpool iostat -v
                                                capacity     operations    bandwidth
    pool                                      used  avail   read  write   read  write
    big_store                                1.51T   150G     35    166  3.03M  1.31M
      c5t600508B40006E5850000B000012F0000d0  98.4G  1.11G      2     10   233K  67.6K
      c5t600508B40006E5850000B00001340000d0  98.1G  1.40G      3     11   244K  72.7K
      c5t600508B40006E5850000B00001370000d0  98.0G  1.52G      3      9   243K  57.9K
      c5t600508B40006E5850000B000013A0000d0  98.0G  1.52G      3     12   256K  72.6K
      c5t600508B40006E5850000B000013D0000d0  98.0G  1.48G      3     14   268K  79.6K
      c5t600508B40006E5850000B00001400000d0  98.0G  1.51G      2     13   234K  80.0K
      c5t600508B40006E5850000B00001430000d0  98.1G  1.44G      2     11   227K  78.2K
      c5t600508B40006E5850000B00001460000d0  96.9G  2.63G      2     10   198K  80.1K
      c5t600508B40006E5850000B00001490000d0  97.6G  1.95G      2     10   225K   104K
      c5t600508B40006E5850000B000014C0000d0  97.1G  2.35G      2     10   205K  82.5K
      c5t600508B40006E5850000B000014F0000d0  97.5G  2.02G      2     12   191K  98.0K
      c5t600508B40006E5850000B00001520000d0  96.2G  3.26G      2     12   209K   104K
      c5t600508B40006E5850000B00001550000d0  95.8G  3.72G      2     14   232K   151K
      c5t600508B40006E5850000B00001580000d0  92.8G  6.74G      2     14   236K   134K
      c5t600508B40006E5850000B000015B0000d0  93.6G  5.89G      2     12   229K   123K
      c5t600508B40006E5850000B00004C30000d0  62.5G  37.0G      1     11   208K   168K
      c5t600508B40006E5850000B000052D0000d0  25.2G  74.3G      1     14   125K   175K
    syspool      854M  67.2G      0      0    201     29
      mirror     854M  67.2G      0      0    201     29
        c1t2d0      -      -      0      0    125     29
        c1t3d0      -      -      0      0    127     29
    ----------  -----  -----  -----  -----  -----  -----
    Also, you can run zpool iostat -v 10 to identify which vdevs are heavily used.
    Cheers
    Andreas

  • Crashing and poor performance during playback of a large project.

    Hi,
    I've been a regular user of iMovie for about 3 years and have edited several 50GB+ projects of DV-quality footage without too many major issues with lag or 'dropped frames'. I currently have an 80GB project that resides on a 95% full 320GB FireWire 400 external drive, and it has been getting very slow to open and near impossible to work with.
    Pair the bursting-at-the-seams external drive with an overburdened 90% full internal drive, and the poor performance wasn't unexpected. So I bought a 1TB FireWire 400 drive to free up some space on my Mac. My large iTunes library (150GB) was the main culprit and it was quickly moved onto the new drive.
    The iMovie project was then moved onto my Mac's movie folder - I figured that the project needs some "room" to work (not that I really understand how Macs use memory) and that having roughly 80GB free with 1.5GB RAM (which is more than used to have) would make everything just that much smoother.
    Wrong.
    The project opened in roughly the same amount of time, but when I tried to play back the timeline, it played like rubbish and then after 10-15 secs the Mac went into 'sleep' mode. The screen goes off, the fans die down and the 'heartbeat' light goes on. A click of the mouse 'wakes' the Mac, only to find that if I try again, I get the same result.
    I've probably got so many variables going on here that it's probably hard to suggest what the problem might be but all I could think of was repairing permissions (which I did and none needed it).
    Stuck on this. Anyone have any advice?

    I understand completely, having worked with a 100GB project once. I found that once a movie bloats up to that size, it just gets harder to work with, jerky playback included.
    I do have a couple of suggestions for you.
    You may need more than that 80GB free space for this movie. Is there any reason you cannot move it to the 1TB drive? If you have only your iTunes on it, you should have about 800 GB free.
    If you still need to have the project on your computer's drive, set your computer to never sleep.
    How close to finishing editing are you with this movie? If you are nearly done except for adding audio clips, you can export (share) it as QuickTime Full Quality movie. The resultant quicktime version of your iMovie will be smaller because it will contain only the clips actually used in the movie, not all the saved whole clips that iMovie keeps as its nondestructive editing feature. The quicktime movie will be one continuous clip, incorporating all your edits and added audio. It CAN be further edited, but you cannot change text of titles already there, or change transitions or remove already added audios.
    I actually do this with nearly every iMovie. I create my movies by first importing videos, then adding still photos, then editing with titles, effects and transitions. I add audio last, and if it becomes too distorted in playback, I export the movie and then continue adding audio clips.
    My 100+ GB movie slimmed down to only 8 GB with this method. (The large size was due to having so many clips. The movie was from VHS footage of my son's little league all-star game, and the video had so many skipped segments that I had to split it into thousands of clips to remove the dropped ones. Very old VHS tape!).
    I haven't upgraded to QT 7.5.5, but I heard that the jerky playback issue is mostly resolved with this new upgrade. I am in mid-project with about 5 iMovies, so I will probably plod along with my work-around method, not wanting to upgrade in the middle of any of them.
    Hope this is helpful to you.

  • Performance during joining large tables

    Hi,
    I have to maintain a report which gets data from many large tables, as below. Currently it uses a join statement across all 8 tables, which causes very slow performance.
    SELECT
        INTO CORRESPONDING FIELDS OF TABLE equip
        FROM caufv
        JOIN afih   ON afih~aufnr  = caufv~aufnr
        JOIN iloa   ON iloa~iloan  = afih~iloan
        JOIN iflos  ON iflos~tplnr = iloa~tplnr
        JOIN iflotx ON iflos~tplnr = iflotx~tplnr
        JOIN vbak   ON vbak~aufnr  = caufv~aufnr
        JOIN equz   ON equz~equnr  = afih~equnr
        JOIN equi   ON equi~equnr  = equz~equnr
        JOIN vbap   ON vbak~vbeln  = vbap~vbeln
        WHERE
    Please suggest another way; I'm a newbie in ABAP. I tried using FOR ALL ENTRIES IN but it did not work. I would very much appreciate it if you could leave me some sample lines of code.
    Thanks,

    Hi Dear,
    I suggest you not use an inner join over that many (eight) tables, and huge ones at that. Instead use FOR ALL ENTRIES wherever possible, but before using FOR ALL ENTRIES check that the base table is not initial. If it's not possible to avoid the inner join, then try to minimise it; use the inner join between header and item tables.
    Hope this will help you to solve your problem. Feel free to ask if you have any doubt.
    Regards,
    Vijay

  • Performance during Unicode Conversion

    Hello,
    currently we are planning to do a Unicode Conversion on a Linux/MaxDB System.
    The DB has a size of approx. 350 GB. The system runs on Fujitsu-Siemens FlexFrame hardware.
    One problem is that during a test conversion the export of the DB took more than 24 hours.
    This seems too long given the DB size.
    We know that the filesystem, RAM and CPUs influence the export duration.
    What possibilities do we have to tune the export with "software" tools?
    - table splitting
    For example: our largest table has ~26GB of data.
    How do we split this table?
    Into 10 parts, or 20, or 200?
    If you could please post your experiences.
    Regards, Michael

    I don't want to discourage you, but I'm not sure splitting that single table will help a lot.
    The problem with this combination is the fact that the MaxDB database reads data block by block, meaning it executes a single I/O for each block read. Given that fact, and the fact that each I/O runs over NFS (with the known latency), it's unlikely that you will get much more speed out of this.
    The problem is NOT the speed between the NAS and the database server; if you look at the network traffic you will see that there's plenty left. The problem is the latency on the client side (the database server) when you put your files on NFS.
    I can export a 250 GB database in under 3 hours (admittedly, without UC conversion) using a direct fibre-channel connection (so non-NFS), so the database itself is certainly not the problem; it's the combination of NFS (no matter if v2 or v3) and the way the MaxDB database reads data.
    I've been involved in a similar project, and with a NetApp specialist and some in-depth tuning we were able to gain 15-20% speed, but not more.
    Markus
