Large number of small orders

Hi All,
How can the process of splitting a large purchase order into smaller orders based on certain criteria, and processing them together such that a block on any one order blocks all the orders, be handled efficiently in SAP?
Would appreciate your inputs on the above.
Thank you.
Abhay.

Hi ,
The scenario is this:
1. The customer sends a big purchase order.
2. This purchase order is split into smaller orders based on certain criteria, and the resulting orders are processed together.
3. These orders are then shipped together (this is achieved by custom developments at various order-processing stages).
Important point to note: if any processing step fails for any of the orders, further processing of all the orders must stop.
This is affecting system performance to a great extent, due to the large volume of orders.
How can the system's efficiency be increased, either within the existing approach or in some other way?
Thank you
Abhay.
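
A minimal sketch of the fail-fast grouping idea, assuming the custom developments drive processing from a program that sees the whole order group at once; the names below are hypothetical, not standard SAP objects. The point is to evaluate the blocking criterion once for the entire group, then stop every remaining order on the first failed step, instead of letting each order re-check all the others.

import java.util.List;

// Hypothetical order record; in a real SAP system this data would come from
// the order tables via one read for the whole group, not one read per order.
record Order(String id, boolean blocked) {}

public class GroupProcessor {

    // Process a split-order group with fail-fast semantics: one blocked
    // order, or one failed step, stops the entire group.
    static boolean processGroup(List<Order> group) {
        // Check the blocking criterion for the whole group up front, in one
        // pass, instead of re-checking it inside every processing step.
        if (group.stream().anyMatch(Order::blocked)) {
            return false; // a block on any one order blocks all the orders
        }
        for (Order order : group) {
            if (!runProcessingStep(order)) {
                return false; // a failed step halts further processing
            }
        }
        return true;
    }

    static boolean runProcessingStep(Order order) {
        // Placeholder for delivery creation, billing, and so on.
        return true;
    }
}

Reading all order statuses for the group in one database round trip, rather than once per order, is usually where most of the runtime goes at this volume.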

Similar Messages

  • Create a large number of purchase order in ME21N

    Hello,
    Is there a CATT or LSMW transaction, or a program, to create a large number of purchase orders with their line items?
    Thanks in advance
    Fany

    You can use LSMW with the direct input method:
    Object 0085, Method 0001
    venkat

  • Animating Large Number of Small Objects in CS3

    A client originally asked me to animate a single butterfly in a seven-second shot.
    Now he wants lots and lots of butterflies flying in the scene, like a mini butterfly swarm.
    Is there an easy way to take the butterfly that I've already created and put lots of them into the scene, without making the process too laborious? I don't want to take the time to import a hundred butterflies in the comp and specify a motion path for each one.
    Is there a tutorial available anywhere for this type of procedure?
    thx

    >So this is something that can't be done in AE without buying plug-ins?
    Not at all. It's just that a third-party plugin happens to be the best and most flexible tool for the job (assuming you don't want to go to a 3D program for this effect). You can get a bunch of butterflies onscreen using Foam, but your only choice is for them to flap their wings in unison, whereas Particular will allow you to sample the time of each butterfly individually, even doing subframe sampling. Also, the Foam effect is strictly 2D, whereas Particular's particles inhabit 3D space (though the particles themselves are still 2D images). CC Particle World can do 3D particle simulations, but it is pretty restrictive as to how the particles move, and while its particles have 3D orientation, this can ironically make them look more fake, since what you get is a flat 2D image rotating in 3D space.
    And Particle Playground, while flexible in a lot of ways, is just a world of hurt to actually use. It is very slow, it can't do some pretty basic things one expects a particle system to do, and it requires elaborate layer maps to control parameters that just require tweaking a couple of numerical properties in other particle systems. And it's limited to 2D. I'm pretty sure it's only still included in AE for the sake of backward-compatibility.
    And no, Particular isn't slow, it's actually quite speedy as particle systems go (at least in my experience).

  • How to add a large number of keywords to the e-mail filter?

    Hello.
    I would like to know how to add a large number of keywords to a filter.
    What I want to accomplish is to add around 4,000 e-mail addresses to a filter list that checks the bodies of incoming e-mails that get forwarded to me.
    I don't want to delete them outright, but I would love it if, when the filter detects that a forwarded message contains one of the e-mail addresses, it adds a tag to the message.
    Is it in any way possible to make a filter like this that doesn't slow Thunderbird down to a crawl?
    I tried to copy the whole list into the small filter tab, but it had no discernible effect on my messages, since some of the previously received ones, which I was sure contained the keywords, were not tagged. All it did was make the program extremely slow, and I was forced to delete the filter.
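
    A filter with 4,000 separate "body contains" terms makes Thunderbird scan every message body once per term, which is why it crawls. Thunderbird's built-in filter UI cannot do this differently, but an add-on or external script could invert the work: extract the address-shaped tokens from the body once, then test each against a hash set. A minimal sketch of that idea, with an illustrative class and pattern rather than any Thunderbird API:

    import java.util.HashSet;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class AddressTagger {
        // Crude e-mail address pattern; good enough for illustration.
        private static final Pattern ADDRESS =
            Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");

        private final Set<String> watchList = new HashSet<>();

        AddressTagger(Iterable<String> addresses) {
            for (String a : addresses) {
                watchList.add(a.toLowerCase());
            }
        }

        // Extract every address-shaped token from the body and test it
        // against the set: a handful of O(1) lookups per message instead
        // of 4,000 substring scans.
        boolean shouldTag(String body) {
            Matcher m = ADDRESS.matcher(body);
            while (m.find()) {
                if (watchList.contains(m.group().toLowerCase())) {
                    return true;
                }
            }
            return false;
        }
    }

    The cost per message then grows with the message length, not with the size of the watch list.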


  • How do I create new versions of a large number of images, and place them in a new location?

    Hello!
    I have been using Aperture for years, and have just one small problem.  There have been many times where I want to have multiple versions of a large number of images.  I like to do a color album and B&W album for example.
    Previously, I would click on all the images at once and select New Version. The problem is this puts all of the new versions in a stack. I then have to open all the stacks and, one by one, move the new versions to a different album. Is there any way to streamline this process? When it's only 10 images, it's no problem. When it's a few hundred (or more), it's rather time-consuming.
    What I'm hoping for is a way to either automatically have new versions populate a separate album, or for a way to easily select all the new versions I create at one time, and simply move them with as few steps as possible to a new destination.
    Thanks for any help,
    Ricardo

    Ricardo,
    in addition to Kirby's and phosgraphis's excellent suggestions, you may want to use the filters to further restrict your versions to the ones you want to access.
    For example, you mentioned
      I like to do a color album and B&W album for example.
    You could easily separate the color versions from the black-and-white versions by using the filter rule:
    Adjustment includes Black&white
         or
    Adjustment does not include Black&white
    With the above filter setting (Add rule > Adjustment includes Black&White), only the versions with the Black&White adjustment are shown in the Browser. You could do something similar to separate cropped versions from uncropped ones.
    Regards
    Léonie

  • Communicate a large number of parameters and variables between VeriStand and a LabVIEW model

    We have a dyno setup with a PXI-E chassis running VeriStand 2014 and Inertia 2014. To enhance the capabilities and timing of VeriStand, I would like to use LabVIEW models to perform tasks not possible in VeriStand and Inertia. An example of this is determining the maximum of a large number of thermocouples. VeriStand has a compare function, but it compares only two values at a time, which makes for some lengthy and inflexible programming. LabVIEW, on the other hand, has a function which allows one to get the maximum of the elements in an array in a single step. To use LabVIEW I need to "send" the 50 or so thermocouples to the LabVIEW model. In addition to the variables which need to be communicated between VeriStand and LabVIEW, I also need to present LabVIEW with the threshold and configuration parameters. From the forums and user manuals I understand that one has to use the connector pane in LabVIEW and mapping in VeriStand System Explorer to expose the inports and outports. The problem is that the LabVIEW connector pane is limited to 27 I/O. How do I overcome that limitation?
    BTW, I am fairly new to LabVIEW and VeriStand.
    Thank you.
    Richard

    @Jarrod:
    Thank you for the help. I created a simple test model and now understand how I can use clusters for a large number of variables. Regarding the mapping process: Can one map a folder of user channels to a cluster (one-step mapping)? Alternatively, I understand one can import a mapping (text) file in System Explorer. Is this import partial or does it replace all the mapping? The reason I am asking is that, if it is partial, then I can have separate mapping files for different configurations and my final mapping can be a combination of imported mapping files.
    @SteveK:
    Thank you for the hint on using a Custom Device. I understand that a Custom Device will be much more powerful and can be more generic. The problem at this stage is that my limitations in programming in LabVIEW are far greater than LabVIEW models' limitations in VeriStand. I'll definitely consider the Custom Device route once I am more proficient with LabVIEW. Hopefully I'll be able to re-use some of the VIs I created for the LabVIEW models.
    Thanks
    Richard
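
    For readers following along, the cluster advice boils down to bundling the 50 readings and the configuration parameters into one composite value, so the connector pane needs a single terminal instead of 50, and then taking the maximum of the whole array in one step. LabVIEW itself is graphical, so the following Java fragment is only an analogy for that structure, not anything VeriStand-specific:

    import java.util.Arrays;

    // Stand-in for a LabVIEW cluster: one composite input carrying all the
    // thermocouple readings plus the threshold, so the interface needs a
    // single "port" rather than one per value.
    record ThermoInput(double[] readings, double threshold) {}

    public class ThermoMonitor {
        // One-step maximum over the whole array, the same idea as wiring
        // the readings into LabVIEW's "Array Max & Min" function.
        static boolean overThreshold(ThermoInput in) {
            double max = Arrays.stream(in.readings())
                               .max().orElse(Double.NEGATIVE_INFINITY);
            return max > in.threshold();
        }
    }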

  • Large number of JSP performance

    Hi,
    a colleague of mine ran tests with a large number of JSPs and identified a performance problem.
    I believe I found a solution to his problem. I tested it with WLS 5.1 SP2 and SP3 and MS jview SDK 4.
    The issue was the duration of the initial call to the nth JSP, which is our situation, as we are doing site hosting.
    The solution can perform around 14 initial invocations/s, no matter whether the invocation is the first or the 3000th, and throughput can go up to 108 JSPs/s when the JSPs are already loaded (the JSPs being the snoopservlet example copied 3000 times).
    The ratios are more meaningful than the absolute values, as the test machine (client and WLS 5.1) was a 266 MHz laptop.
    I repeat Marc's post of 2/11/2000, as it is an old one:
    Hi all,
    I'm wondering if any of you has experienced performance issues when deploying a lot of JSPs.
    I'm running WebLogic 4.51 SP4 with the performance pack on NT4 and JDK 1.2.2.
    I deployed over 3000 JSPs (identical but with distinct names) on my server.
    I took care to precompile them off-line.
    To run my tests I used a servlet selecting one of them randomly and redirecting the request:
    getServletContext().getRequestDispatcher(randomUrl).forward(request,response);
    The response time slows down dramatically as the number of distinct JSPs invoked grows (up to 100 times the initial response time).
    I made some additional tests.
    When you set the properties:
    weblogic.httpd.servlet.reloadCheckSecs=-1
    weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
    Then the response time for a new JSP seems linked to a "capacity increase process" and depends on the number of previously activated JSPs. If you invoke a previously loaded page, the server answers really fast, with no delay.
    If you set the previous properties to any other value (0, for example), the response time remains bad even when you invoke a previously loaded page.
    SOLUTION DESCRIPTION
    Intent
    The package described below is designed to allow
    * Fast invocation even with a large number of pages (which can be the case
    with Web Hosting)
    * Dynamic update of compiled JSP
    Implementation
    The current implementation has been tested with JDK 1.1 only and works with
    MS SDK 4.0.
    It has been tested with WLS 5.1 with service packs 2 and 3.
    It should work with most application servers, as its requirements are
    limited. It requires
    a JSP to be able to invoke a class loader.
    Principle
    To keep invocation fast, it does not support dynamic compilation as described in the JSP model.
    There is no automatic recognition of modifications. Instead, a JSP is made available to invalidate pages which must be updated.
    We assume pages managed through this package to be declared in
    weblogic.properties as
    weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
    This definition means that, when a servlet or JSP with a .ocg extension is
    requested, it is
    forwarded to the package.
    It implies 2 things:
    * Regular JSP handling and package based handling can coexist in the same
    Application Server
    instance.
    * It is possible to extend the implementation to support many extensions
    with as many
    package instances.
    The package (ocgLoaderPkg) contains 2 classes:
    * ocgServlet, a servlet instantiating JSP objects using a class loader.
    * ocgLoader, the class loader itself.
    A single class loader object is created.
    Both the JSP instances and classes are cached in hashtables.
    The invalidation JSP is named jspUpdate.jsp.
    To invalidate a JSP, it simply removes the object and class entries from the caches.
    ocgServlet
    * Lazily creates the class loader.
    * Retrieves the target JSP instance from the cache, if possible.
    * Otherwise, it uses the class loader to retrieve the target JSP class, creates a target JSP instance, and stores it in the cache.
    * Forwards the request to the target JSP instance.
    ocgLoader
    * If the requested class does not have the extension ocgServlet is configured to process, it behaves as a regular class loader and forwards the request to the parent or system class loader.
    * Otherwise, it retrieves the class from the cache, if possible.
    * Otherwise, it loads the class.
    Do you think it is a good solution?
    I believe this solution is faster than the standard WLS one, because it is a very small piece of code, but also because:
    - My class loader is deterministic: if the file has the right extension I don't call the class loader hierarchy first.
    - I don't try to support jars. That was one of the hardest design decisions. We definitely need a way to update a specific page, but at the same time someone told us NT could have problems handling 3000 files in the same directory (it seems he was wrong).
    - I don't try to check if a class has been updated. I have to ask for a refresh using a JSP for now, but it could be an EJB.
    - I don't try to check if a source has been updated.
    - As I know the number of JSPs, I can set the initial capacity of the hashtables I use as caches quite accurately, and so avoid rehashing.
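
    For readers who want to see the shape of the scheme, here is a compact sketch of the two classes described above. The class names follow the post, but the bodies are illustrative guesses rather than the author's code: the sketch uses the standard parent-first delegation of loadClass instead of the post's extension-based short-circuit, and the URL-to-class-name mapping is hypothetical.

    import java.util.Hashtable;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class ocgServlet extends HttpServlet {
        // JSP instances cached by URL; pre-sized, as in the post, to avoid
        // rehashing for a known page count.
        private final Hashtable<String, Servlet> instances = new Hashtable<>(4096);
        private ocgLoader loader; // created lazily on the first request

        protected void service(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, java.io.IOException {
            String page = req.getServletPath(); // e.g. /foo.ocg
            Servlet target = instances.get(page);
            if (target == null) {
                target = instantiate(page);
                instances.put(page, target);
            }
            target.service(req, res); // forward to the cached JSP instance
        }

        private synchronized Servlet instantiate(String page) throws ServletException {
            if (loader == null) {
                loader = new ocgLoader(getClass().getClassLoader());
            }
            try {
                Class<?> cls = loader.loadClass(classNameFor(page));
                Servlet s = (Servlet) cls.getDeclaredConstructor().newInstance();
                s.init(getServletConfig());
                return s;
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }

        // Invalidation (the post's jspUpdate.jsp) just drops the cache entries.
        void invalidate(String page) {
            instances.remove(page);
            if (loader != null) {
                loader.invalidate(classNameFor(page));
            }
        }

        private static String classNameFor(String page) {
            // Hypothetical mapping from request path to precompiled JSP class.
            return "jsp" + page.replace('/', '_').replace(".ocg", "");
        }
    }

    class ocgLoader extends ClassLoader {
        // Classes cached by name, as described in the post.
        private final Hashtable<String, Class<?>> classes = new Hashtable<>(4096);

        ocgLoader(ClassLoader parent) { super(parent); }

        protected Class<?> findClass(String name) throws ClassNotFoundException {
            Class<?> c = classes.get(name);
            if (c == null) {
                byte[] bytes = readClassFile(name); // .class bytes from disk
                c = defineClass(name, bytes, 0, bytes.length);
                classes.put(name, c);
            }
            return c;
        }

        void invalidate(String name) { classes.remove(name); }

        private byte[] readClassFile(String name) throws ClassNotFoundException {
            try {
                return java.nio.file.Files.readAllBytes(
                    java.nio.file.Path.of(name.replace('.', '/') + ".class"));
            } catch (java.io.IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }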

    Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.


  • FMS: how to prevent the client from sending a large number of bytes?

    Hello. How can FMS prevent a client from passing a large number of bytes, for example someone putting a 1 GB file into an argument that I then have to receive? Client.setBandwidthLimit() limits a client's maximum traffic per second, but is there a way to disconnect a client once it exceeds a maximum byte count? As far as I can tell, any method that checks the length only gets it after the whole transfer has finished.

    In other words: how do I limit the size of a method's parameters? I wrote a method in main.asc that the client invokes via NetConnection.call, but if a client maliciously uploads very large data, how do I limit it? I searched the documentation and found no clues. I would like the parameters to be limited to 100 KB.
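
    FMS server-side code lives in main.asc and is ActionScript, so the fragment below is only a language-neutral illustration of the guard idea, not an FMS API: measure the payload before doing any real work, and drop the client as soon as it exceeds the cap.

    // Illustrative guard: reject a call whose argument exceeds a cap before
    // processing it, and disconnect the client that sent it.
    public class PayloadGuard {
        private static final int MAX_ARG_BYTES = 100 * 1024; // ~100 KB cap

        interface Client { void disconnect(); }

        static boolean accept(Client client, byte[] payload) {
            if (payload.length > MAX_ARG_BYTES) {
                client.disconnect(); // oversized argument: cut the connection
                return false;
            }
            return true;
        }
    }

    The catch, as the poster suspects, is that a server-side length check only fires after the bytes have arrived; a hard cap on what a client may send has to be enforced below the call handler.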

  • Oracle Error 01034 After attempting to delete a large number of rows

    I sent the command to delete a large number of rows from a table in an oracle database (Oracle 10G / Solaris). The database files are located at /dbo partition. Before the command the disk space utilization was at 84% and now it is at 100%.
    SQL Command I ran:
    delete from oss_cell_main where time < '30 jul 2009'
    If I try to connect to the database now I get the following error:
    ORA-01034: ORACLE not available
    df -h returns the following:
    Filesystem size used avail capacity Mounted on
    /dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
    /dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
    /dev/md/dsk/d8 42G 42G 0K 100% /dbo
    I tried to get the space back by deleting all the data in the table oss_cell_main :
    drop table oss_cell_main purge
    But no change in df output.
    I have tried solving it myself but could not find sufficiently directed information. Even pointing me to the right documentation will be highly appreciated. I have already looked at the following:
    du -h:
    8K ./lost+found
    1008M ./system/69333
    1008M ./system
    10G ./rollback/69333
    10G ./rollback
    27G ./data/69333
    27G ./data
    1K ./inx/69333
    2K ./inx
    3.8G ./tmp/69333
    3.8G ./tmp
    150M ./redo/69333
    150M ./redo
    42G .
    I think its the rollback folder that has increased in size immensely.
    SQL> show parameter undo
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 10800
    undo_tablespace string UNDOTBS1
    select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
    TABLESPACE_NAME: UNDOTBS1
    BLOCK_SIZE: 8192
    INITIAL_EXTENT: 65536
    MIN_EXTENTS: 1
    MAX_EXTENTS: 2147483645
    MIN_EXTLEN: 65536
    STATUS: ONLINE
    CONTENTS: UNDO
    LOGGING: LOGGING
    FORCE_LOGGING: NO
    EXTENT_MANAGEMENT: LOCAL
    ALLOCATION_TYPE: SYSTEM
    PLUGGED_IN: NO
    SEGMENT_SPACE_MANAGEMENT: MANUAL
    DEF_TAB_COMPRESSION: DISABLED
    RETENTION: NOGUARANTEE
    BIGFILE: NO
    Note: I can reconnect to the database for short periods by restarting it. After some restarts it does connect, but only for a few minutes, not long enough to run exp.

    Check the alert log for errors.
    Select file_name, bytes from dba_data_files order by bytes;
    Try to shrink some datafiles to get space back.
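
    A sketch of that advice as it might look from JDBC; the connection string, credentials, and datafile name are placeholders. Note that RESIZE only succeeds when the free space sits at the end of the datafile, and that space freed by DELETE normally stays allocated to the segment, which is why the undo growth shown in the du output is the more likely culprit here.

    import java.sql.*;

    public class ShrinkDatafiles {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "secret")) {

                // List the datafiles by size, as suggested above.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT file_name, bytes FROM dba_data_files ORDER BY bytes")) {
                    while (rs.next()) {
                        System.out.printf("%s %,d bytes%n",
                            rs.getString(1), rs.getLong(2));
                    }
                }

                // Try to give space back to the OS from one datafile.
                try (Statement st = conn.createStatement()) {
                    st.execute("ALTER DATABASE DATAFILE '/dbo/data01.dbf' RESIZE 30G");
                }
            }
        }
    }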

  • Large number of pictures have disappeared after backing up my iPhone 5s

    I used a new iCloud backup (which I created today) to set up my new iPhone 5s. Everything went well: apps, contacts, everything. But a large number of pictures were not restored from the backup; the last picture on my iPhone now is actually from June 14. I really need to get my photos back. Please help me!
    Using iPhone 5s, iOS 8.1.1

    No one will help me ?

  • Passing a large number of column values to an Oracle insert procedure

    I am quite new to the ODP space, so can someone please tell me the best and most efficient way to pass a large number of column values to an Oracle procedure from my C# program? Passing a small number of values as parameters seems OK, but when there are many, this seems inelegant.

    > Passing a small number of values as parameters seems OK but when there are many, this seems inelegant.
    Is it possible that your table with a staggering number of columns, or a method that collapses without so many inputs, is ultimately what is inelegant?
    I once did a database conversion from VAX RMS system with a "table" with 11,000 columns to a normalized schema in an Oracle database. That was inelegant.
    Michael O
    http://blog.crisatunity.com
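
    The thread is about ODP.NET, but the usual cure for parameter sprawl is the same from any client: bind one collection to a single procedure parameter instead of dozens of scalars. A hedged JDBC sketch of that idea; it assumes a SQL type and procedure such as CREATE TYPE num_list AS TABLE OF NUMBER and insert_row(p_vals IN num_list), both hypothetical names, plus Oracle's ojdbc driver on the classpath:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class CollectionParam {
        public static void main(String[] args) throws Exception {
            // The "large number" of values, gathered into one array.
            Object[] values = new Object[200];
            for (int i = 0; i < values.length; i++) {
                values[i] = i;
            }

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                // Bind the whole array to a single parameter of the assumed
                // SQL collection type NUM_LIST.
                java.sql.Array arr = conn
                    .unwrap(oracle.jdbc.OracleConnection.class)
                    .createOracleArray("NUM_LIST", values);
                try (CallableStatement cs = conn.prepareCall("{call insert_row(?)}")) {
                    cs.setArray(1, arr);
                    cs.execute();
                }
            }
        }
    }

    ODP.NET offers the equivalent through PL/SQL associative-array binding, so the C# side can likewise stay at one parameter.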

  • Fastest way to handle and store a large number of posts in a very short time?

    I need to handle a very large number of HTTP posts in a very short period of time. The handling will consist of nothing more than storing the data posted and returning a redirect. The data will be quite small (email, postal code). I don't know exactly how
    many posts, but somewhere between 50,000 and 500,000 over the course of a minute.
    My plan is to use Traffic Manager to distribute the load across several data centers, and to have a website scaled to 10 instances per data center. For storage, I thought that Azure table storage would be the ideal way to handle this, but I'm not sure whether the latency would prevent my app from handling this much data.
    Has anyone done anything similar, and do you have a suggestion for storing the data? Perhaps buffering everything into memory and then batching from there to table storage would be ideal. I'm starting to load-test the direct-to-table-storage solution and am not encouraged.

    You are talking about a website with 500,000 posts per minute with redirection, so you are talking about designing a system that can handle at least 500,000 users; and assuming that not all users post within a one-minute timeframe, you are talking about designing a system that can handle millions of users at any one time.
    Event hub architecture is completely different from the HTTP post architecture: every device/user/session writes directly to the hub. I was just wondering if that would actually work better in your situation.
    Frank
    The site has no session or page displaying. It literally will record a few form values posted from another site and issue a redirect back to that originating site. It is purely for data collection. I'll see if it is possible to write directly to the event hub/service
    bus system from a web page. If so, that might work well.
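
    Whichever target wins out, the buffering the original poster describes looks roughly like the sketch below. The storage call is a stand-in; with Azure table storage, one batch is limited to 100 entities in a single partition, which is why the batch size here is 100.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class BufferedIngest {
        record Submission(String email, String postalCode) {}

        // Bounded buffer: the HTTP handler never waits on storage latency.
        private final BlockingQueue<Submission> queue = new LinkedBlockingQueue<>(100_000);

        // Called by the HTTP handler: enqueue and return immediately so the
        // redirect is not delayed by the storage write.
        public boolean submit(Submission s) {
            return queue.offer(s); // false = buffer full, shed load
        }

        // Background thread: drain up to 100 entries at a time and write
        // them as one batch.
        public void flushLoop() throws InterruptedException {
            List<Submission> batch = new ArrayList<>(100);
            while (true) {
                Submission first = queue.poll(1, TimeUnit.SECONDS);
                if (first == null) {
                    continue; // nothing buffered right now
                }
                batch.add(first);
                queue.drainTo(batch, 99);
                writeBatch(batch);
                batch.clear();
            }
        }

        private void writeBatch(List<Submission> batch) {
            // Stand-in for the real storage call (table batch, event hub, ...).
            System.out.println("flushed " + batch.size() + " submissions");
        }
    }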

  • How to design Storage Spaces with a large number of drives

    I am wondering how one might go about designing a storage space for a large number of drives; specifically, I've got 45 x 4 TB drives. As I am not extremely familiar with Storage Spaces, I'm a bit confused as to how I should go about designing this. Here is how I would do it with hardware RAID, and I'd like to know how best to match this setup in Storage Spaces. I've been burned twice now by poorly designed storage spaces and I don't want to get burned again. I want to make sure that if a drive fails, I'm able to properly replace it without SS tossing its cookies.
    In the hardware RAID world, I would divide these 45 x 4 TB drives into three separate 15-disk RAID 6s (thus losing 6 drives to parity). Each RAID 6 would show up as a separate volume/drive to the parent OS. If any disk failed in any of the three RAIDs, I would simply pull it out, put a new disk in, and the RAID would rebuild itself.
    Here is my best guess for Storage Spaces: I would create 3 separate storage pools, each containing 15 disks. I would then create a separate dual-parity virtual disk for each pool (also losing 6 drives to parity). Each virtual disk would appear as a separate volume/disk to the parent OS. Did I miss anything?
    Additionally, is there any benefit to breaking up my 45 disks into 3 separate pools? Would it be better to create one giant pool with all 45 disks and then create 3 (or however many) virtual disks on top of that one pool?

    1) Try to avoid parity and especially double-parity RAID with a typical VM workload. Such a workload is dominated by small reads (OK) and small writes (not OK, as the whole parity stripe gets updated with every read-modify-write sequence). As a result, writes will be dog slow. Another nasty parity-RAID characteristic is very long rebuild times: it's pretty easy to get a second (third, with double parity) drive failure during the rebuild process, and that would render the whole RAID set useless. The solution is to use RAID 10: much safer and faster to run and rebuild than RAID 5/6, but it wastes half of the raw capacity...
    2) Creating "islands" of storage is an extremely effective way of stealing IOPS away from your config. A typical modern RAID set runs out of IOPS long before running out of capacity, so unless you're planning a file dump of ice-cold data or CCTV storage, you'll absolutely need all the IOPS from all spindles at the same time. This again means One Big RAID 10: OBR10.
    Hope this helped a bit :) Good luck!

  • Analyze table after inserting a large number of records?

    For performance purposes, is it good practice to execute an 'analyze table' command after inserting a large number of records into a table in Oracle 10g, if a complex query follows the insert?
    For example:
    Insert into foo ......                -- insert one million records into table foo
    analyze table foo COMPUTE STATISTICS; -- analyze table foo
    select * from foo, bar, car ......    -- execute a complex query without hints,
                                          -- after 1 million records inserted into foo
    Does this strategy help to improve the overall performance?
    Thanks.

    Different execution plans will most frequently occur when the ratio of the number of records in the various tables involved in the select has changed tremendously. This happens above all when 'fact' tables are growing and 'lookup' tables stay constant.
    This is why you shouldn't test an application with a small number of 'fact' records.
    This can happen both with analyze table and with dbms_stats.
    The advantage of dbms_stats is that it can export the current statistics to a stats table, so you can always revert to them using dbms_stats.import_table_stats.
    You can even overrule individual table and column statistics with artificial values.
    Hth
    Sybrand Bakker
    Senior Oracle DBA
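
    A sketch of that workflow from JDBC, using the DBMS_STATS procedures; the schema defaults to the current user and the table names are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class StatsWorkflow {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement st = conn.createStatement()) {

                // One-time setup: a table to hold exported statistics.
                st.execute("BEGIN DBMS_STATS.CREATE_STAT_TABLE("
                         + "ownname => USER, stattab => 'MY_STATS'); END;");

                // Snapshot the pre-insert statistics so they can be restored.
                st.execute("BEGIN DBMS_STATS.EXPORT_TABLE_STATS("
                         + "ownname => USER, tabname => 'FOO', "
                         + "stattab => 'MY_STATS'); END;");

                // Gather fresh statistics after loading the million rows.
                st.execute("BEGIN DBMS_STATS.GATHER_TABLE_STATS("
                         + "ownname => USER, tabname => 'FOO'); END;");

                // If the new plans misbehave, revert with:
                // BEGIN DBMS_STATS.IMPORT_TABLE_STATS(
                //   ownname => USER, tabname => 'FOO',
                //   stattab => 'MY_STATS'); END;
            }
        }
    }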
