Portion of buffer

Hi,
I am sending spool-request output to a file using the function module RSPO_RETURN_ABAP_SPOOLJOB.
Is there any function module to select only some portion of this buffer/output?
For example: the total output contains 5000 pages,
but I want to select only the first 1500 pages -> transfer to file,
then select pages 1501 to 3000 -> transfer to file, and so on.
This is because, in the production system, the number of pages will be in the millions, which is causing an overflow error.
Please help me
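Whatever interface the function module ultimately exposes for this, the splitting itself is just fixed-size page-range arithmetic. A minimal sketch in Java of that arithmetic (the class and method names are the example's own; the per-range transfer would be done by the actual spool call):

```java
// Sketch: split a large output of totalPages into fixed-size page ranges.
// Only the range arithmetic is shown; the transfer per range is out of scope.
public class PageChunker {
    // Returns the inclusive [first, last] page ranges covering totalPages.
    static int[][] ranges(int totalPages, int chunkSize) {
        int n = (totalPages + chunkSize - 1) / chunkSize; // ceiling division
        int[][] out = new int[n][2];
        for (int i = 0; i < n; i++) {
            out[i][0] = i * chunkSize + 1;                         // first page of chunk
            out[i][1] = Math.min((i + 1) * chunkSize, totalPages); // last page of chunk
        }
        return out;
    }
}
```

For the example above, `ranges(5000, 1500)` yields [1,1500], [1501,3000], [3001,4500], [4501,5000] — each range small enough to transfer to a file without overflowing.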

I've seen this with properly sized servers with very little Exchange load running. It could be a  number of different things.  Here are some items to check:
Confirm that the server hardware has the latest BIOS, drivers, firmware, etc
Confirm that the Windows OS is running the recommended hotfixes.  Here is an older post that might still apply to you
http://blogs.technet.com/b/dblanch/archive/2012/02/27/a-few-hotfixes-to-consider.aspx
http://support.microsoft.com/kb/2699780/en-us
Set up perfmon to capture data from the server. Look for disk performance, excessive paging, CPU/processor spikes, and more.  Use the PAL tool to collect and analyze the perf data -
http://pal.codeplex.com/
Include looking for other applications or processes that might be consuming system resources (AV, Backup, security, etc)
Be sure that the disks are properly aligned -
http://blogs.technet.com/b/mikelag/archive/2011/02/09/how-fragmentation-on-incorrectly-formatted-ntfs-volumes-affects-exchange.aspx
Check that the network is properly configured for Exchange Server.  You might be surprised how the network config can cause perf & SCOM alerts.
Make sure that you did not (improperly) statically set msExchESEParamCacheSizeMax and msExchESEParamCacheSizeMin attributes in Active Directory -
http://technet.microsoft.com/en-us/library/ee832793(v=exchg.141).aspx
Be sure that hyperthreading is NOT enabled -
http://technet.microsoft.com/en-us/library/dd346699(v=exchg.141).aspx#Hyper
Check that there are no hardware issues on the server (RAM, CPU, etc).  You might need to run some vendor specific utilities/tools to validate.
Proper paging file configuration should be considered for Exchange servers.  You can use the perfmon to see just how much paging is occurring.
These will usually lead you in the right direction. Good Luck!

Similar Messages

  • Injecting DTMF event in the midst of RTP media streaming?

    Hi all, I am attempting to implement a mechanism that allows me to inject DTMF RTP events into an RTP media stream. This is useful in telephony applications where users are prompted to enter digits while being served by automated voice services such as answering services or telebanking. So basically my approach is to extend Sun's RTP packetizer, intercept outgoing packets when necessary, and replace them with the appropriate DTMF RTP (RFC 2833) packets. Sounds simple enough.
    So I derived my custom packetizer from com.sun.media.codec.audio.ulaw.Packetizer and overrode its process() method. Normally my packetizer's process() simply delegates the functionality to its parent class's process() method. When a DTMF digit is required, I take the output buffer generated by the parent's process() and modify its header and payload to turn it into a DTMF RTP packet.
    The problem is that the class RTPHeader is so limited there is no way to set the payload type, sequence number, etc., and the documentation is precious few and far between. If someone has solved this issue, or if you have a reference to some documentation that describes the inner workings of JMF's codec chaining, I would appreciate some pointers. What I need to know is:
    - What JMF does with the Buffer objects between stages (from one codec to the next)?
    - The data portion of Buffer is an Object of arbitrary class; what on earth does JMF do with that?
    - How does JMF take a Buffer object and turn it into a UDP packet?
    - How do I go about creating an RTP header and fill it with the information (payload type, timestamp, sequence number, ...) that I want?
    Thanks in advance,

    I don't know what the heck RTPHeader represents, but it sure doesn't seem to conform to RFC 1889. Also, it seems the UDP RTP packets are formed somewhere after the packetizer and before the RTP connector; someone ought to jot all this down in a book. Anyway, using my own RTPConnector implementation I have some control over the outgoing/incoming RTP packets. To access and manipulate the real RTP header (not RTPHeader), I devised a new class that takes the RTP packet buffer and provides an API to examine and manipulate some RTP info directly on the buffer without any unnecessary data copies (you can extend it to do more as per your requirements):
    package com.mycompany.media;
    import javax.media.rtp.RTPConnector;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.io.IOException;
    import javax.media.protocol.ContentDescriptor;
    import javax.media.protocol.SourceTransferHandler;
    import java.net.DatagramPacket;
    import javax.media.protocol.PushSourceStream;
    import javax.media.rtp.OutputDataStream;
    import java.net.SocketException;
    import mitel.utilities.MiQueue;
    import java.nio.ByteBuffer;
    import java.util.LinkedList;
    class MiRtpHeader {
         byte[] data;
         int myoffset;

         public MiRtpHeader(byte[] buf, int offset, int len) throws ArrayIndexOutOfBoundsException {
              if (len < 12)
                   throw new ArrayIndexOutOfBoundsException("Buffer not large enough to contain a basic RTP header");
              data = buf;
              myoffset = offset;
         }
         public boolean getExtension() {
              return (0 != (data[myoffset] & 0x10));
         }
         public void setExtension(boolean state) {
              if (state)
                   data[myoffset] = (byte) (data[myoffset] | 0x10);
              else
                   data[myoffset] = (byte) (data[myoffset] & 0xef);
         }
         public boolean getMarker() {
              return (0 != (data[myoffset + 1] & 0x80));
         }
         public void setMarker(boolean state) {
              if (state)
                   data[myoffset + 1] = (byte) (data[myoffset + 1] | 0x80);
              else
                   data[myoffset + 1] = (byte) (data[myoffset + 1] & 0x7f);
         }
         public int getTs() {
              ByteBuffer tsBuf = ByteBuffer.wrap(data, myoffset + 4, 4);
              return tsBuf.getInt();
         }
         public void setTs(int ts) {
              ByteBuffer tsBuf = ByteBuffer.wrap(data, myoffset + 4, 4);
              tsBuf.putInt(ts);
         }
         public int getPayloadType() {
              // low 7 bits of the second byte; mask out the marker bit
              return data[myoffset + 1] & 0x7f;
         }
    }
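To sanity-check the bit arithmetic used above, here is a small self-contained demo of the same RTP header layout (RFC 3550: marker bit is the top bit of the second byte, payload type its low 7 bits, timestamp in bytes 4-7, big-endian). The class and method names are the example's own:

```java
import java.nio.ByteBuffer;

// Standalone check of the RTP header bit layout (RFC 3550).
public class RtpBitsDemo {
    // Build a minimal 12-byte RTP header with the given fields.
    static byte[] makePacket(boolean marker, int payloadType, int timestamp) {
        byte[] pkt = new byte[12];
        pkt[0] = (byte) 0x80;                               // V=2, no padding/extension/CSRC
        pkt[1] = (byte) ((marker ? 0x80 : 0) | (payloadType & 0x7f));
        ByteBuffer.wrap(pkt, 4, 4).putInt(timestamp);       // network byte order
        return pkt;
    }
    static boolean marker(byte[] pkt)  { return (pkt[1] & 0x80) != 0; }
    static int payloadType(byte[] pkt) { return pkt[1] & 0x7f; }
    static int timestamp(byte[] pkt)   { return ByteBuffer.wrap(pkt, 4, 4).getInt(); }

    public static void main(String[] args) {
        // 101 is a common dynamic payload type for telephone-event, but it is negotiated per session.
        byte[] pkt = makePacket(true, 101, 160);
        System.out.println(marker(pkt) + " " + payloadType(pkt) + " " + timestamp(pkt));
        // prints: true 101 160
    }
}
```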

  • ESE - Event Log Warning: 906 - A significant portion of the database buffer cache has been written out to the system paging file...

    Hello -
    We have 3 x EX2010 SP3 RU5 nodes in a cross-site DAG.
    Multi-role servers with 18 GB RAM [increased from 16 GB in an attempt to clear this warning without success].
    We run nightly backups on both nodes at the Primary Site.
    Node 1 backup covers all mailbox databases [active & passive].
    Node 2 backup covers the Public Folders database.
    The backups for each database are timed so they do not overlap.
    During each backup we get several of these event log warnings:
     Log Name:      Application
     Source:        ESE
     Date:          23/04/2014 00:47:22
     Event ID:      906
     Task Category: Performance
     Level:         Warning
     Keywords:      Classic
     User:          N/A
     Computer:      EX1.xxx.com
     Description:
     Information Store (5012) A significant portion of the database buffer cache has been written out to the system paging file.  This may result  in severe performance degradation.
     See help link for complete details of possible causes.
     Resident cache has fallen by 42523 buffers (or 27%) in the last 903 seconds.
     Current Total Percent Resident: 26% (110122 of 421303 buffers)
    We've rescheduled the backups, and the warning-message occurrences just move with the backup schedules.
    We're not aware of any perceived end-user performance degradation; overnight backups in this time zone coincide with the business day for mailbox users in SEA.
    I raised a call with the Microsoft Enterprise Support folks; they had a look at the BPA output and at the output from their diagnostics tool. We have enough RAM and no major issues detected.
    They suggested McAfee AV could be the root of our problems, but we have v8.8 with EX2010 exceptions configured.
    Backup software is Asigra V12.2 with latest hotfixes.
    We're trying to clear up these warnings as they're throwing SCOM alerts and making a mess of availability reporting.
    Any suggestions please?
    Thanks in advance

    Having said all that, a colleague has suggested we just limit the amount of RAM available for the EX2010 DB cache
    Then it won't have to start releasing RAM when the backup runs, and won't throw SCOM alerts
    This attribute should do it...
    msExchESEParamCacheSizeMax
    http://technet.microsoft.com/en-us/library/ee832793.aspx
    Give me a shout if this is a bad idea
    Thanks

  • A significant portion of the database buffer cache has been written out to the system paging file.

    Hi,
    We seem to get this error through SCOM every couple of weeks.  It doesn't correlate with the AV updates, so I'm not sure what's eating up the memory.  The server has been patched to the latest roll up and service pack.  The mailbox servers
    have been provisioned sufficiently with more than enough memory.  Currently they just slow down until the databases activate on another mailbox server.
    A significant portion of the database buffer cache has been written out to the system paging file.
    Any ideas?

    I've seen this with properly sized servers with very little Exchange load running. It could be a  number of different things.  Here are some items to check:
    Confirm that the server hardware has the latest BIOS, drivers, firmware, etc
    Confirm that the Windows OS is running the recommended hotfixes.  Here is an older post that might still apply to you
    http://blogs.technet.com/b/dblanch/archive/2012/02/27/a-few-hotfixes-to-consider.aspx
    http://support.microsoft.com/kb/2699780/en-us
    Set up perfmon to capture data from the server. Look for disk performance, excessive paging, CPU/processor spikes, and more.  Use the PAL tool to collect and analyze the perf data -
    http://pal.codeplex.com/
    Include looking for other applications or processes that might be consuming system resources (AV, Backup, security, etc)
    Be sure that the disks are properly aligned -
    http://blogs.technet.com/b/mikelag/archive/2011/02/09/how-fragmentation-on-incorrectly-formatted-ntfs-volumes-affects-exchange.aspx
    Check that the network is properly configured for Exchange Server.  You might be surprised how the network config can cause perf & SCOM alerts.
    Make sure that you did not (improperly) statically set msExchESEParamCacheSizeMax and msExchESEParamCacheSizeMin attributes in Active Directory -
    http://technet.microsoft.com/en-us/library/ee832793(v=exchg.141).aspx
    Be sure that hyperthreading is NOT enabled -
    http://technet.microsoft.com/en-us/library/dd346699(v=exchg.141).aspx#Hyper
    Check that there are no hardware issues on the server (RAM, CPU, etc).  You might need to run some vendor specific utilities/tools to validate.
    Proper paging file configuration should be considered for Exchange servers.  You can use the perfmon to see just how much paging is occurring.
    These will usually lead you in the right direction. Good Luck!

  • SCOM reports "A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation"

    This was discussed here, with no resolution
    http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
    I have the same issue.  This is a single-purpose physical mailbox server with 320 users and 72GB of RAM.  That should be plenty.  I've checked and there are no manual settings for the database cache.  There are no other problems with
    the server, nothing reported in the logs, except for the aforementioned error (see below).
    The server is sluggish.  A reboot will clear up the problem temporarily.  The only processes using any significant amount of memory are store.exe (using 53GB), regsvc (using 5) and W3 and Monitoringhost.exe using 1 GB each.  Does anyone have
    any ideas on this?
    Warning ESE Event ID 906. 
    Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file.  This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache
    has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)

    Brian,
    We had this event log entry as well, which SCOM picked up on, and 10 seconds before it, Forefront Protection 2010 for Exchange updated all of its engines.
    We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
    for the sole purpose of serving as our public folder servers.
    So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some - thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the cache flush to the paging file, we got the following alert:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:14 AM
    Event ID:      17012
    Task Category: Storage
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
       at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
    Followed by:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:15 AM
    Event ID:      17106
    Task Category: Storage
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:13:50 AM
    Event ID:      17102
    Task Category: Storage
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action.  This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
    is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
    So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent actions.
    Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
    Thanks!

  • Constructing a buffer to hold bytes going to socket

    I'm a newbie and struggling with a project to construct a multi-threaded Web server that enables a browser-client to access an HTML file. I've constructed a 1K buffer, but if I want to examine various file transfer rates with different file sizes, I have to recode and recompile. How do I construct the buffer so that I can change the transfer rate and the file sizes without having to recode/recompile? The portion of my code relating to buffer construction is as follows:
    private static void sendBytes(FileInputStream fis, OutputStream os)
              throws Exception {
         // Construct a 1K buffer to hold bytes on their way to the socket
         byte[] buffer = new byte[1024];
         int bytes = 0;
         // Copy requested file into the socket's output stream
         while ((bytes = fis.read(buffer)) != -1)
              os.write(buffer, 0, bytes);
    }

    private static String contentType(String fileName) {
         if (fileName.endsWith(".htm") || fileName.endsWith(".html"))
              return "text/html";
         if (fileName.endsWith(".jpg"))
              return "image/jpeg";
         if (fileName.endsWith(".gif"))
              return "image/gif";
         return "application/octet-stream";
    }

    Use a system property. When you run your program:
    java -Dbuffer.size=4096 ...
    Then in your code:
    int bufsize = Integer.getInteger("buffer.size", 1024);
    byte[] buffer = new byte[bufsize];
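Putting the suggestion together, a self-contained sketch (the property name "buffer.size" is the example's own; note the API is `Integer.getInteger`, not `Integer.get`):

```java
// Read a buffer size from a system property, falling back to a default.
public class BufSize {
    static int bufferSize() {
        // Integer.getInteger looks up a system property and parses it as an int;
        // returns the default (1024 here) if the property is unset or unparsable.
        return Integer.getInteger("buffer.size", 1024);
    }
    public static void main(String[] args) {
        System.setProperty("buffer.size", "4096"); // equivalent to -Dbuffer.size=4096
        byte[] buffer = new byte[bufferSize()];
        System.out.println(buffer.length);
        // prints: 4096
    }
}
```

This lets you rerun file-transfer experiments with different buffer sizes from the command line, with no recompilation.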

  • Intercepting image buffer data

    I'm trying to develop a program which needs some image processing to occur on the stream of data being drawn to the screen as displayed by the camera. I don't actually need to capture the video, just process portions of the screen space.
    How should I go about intercepting the data being drawn to the screen buffer?


  • What are all information brought into database buffer cache ?

    Hi,
    What information is brought into the database buffer cache when a user performs operations such as INSERT, UPDATE, DELETE, or SELECT?
    Is only the data block to be modified brought into the cache, or are all of a table's data blocks brought into the cache during the operations I mentioned above?
    What is the purpose of the SQL area? What information is kept in the SQL area?
    Please explain the logic behind the questions I asked above.
    thanks in advance,
    nvseenu

    Documentation is your friend. Why not start by reading the Memory Architecture chapter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm
    Message was edited by:
    orafad
    Hi orafad,
    I have read the Memory Architecture chapter.
    In that documentation, the following explanation is given:
    The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles.
    But I would like to know whether all or only some data blocks are brought into the cache.
    thanks in advance,
    nvseenu

  • Dynamic history size of xy chart buffer

    Hi all..
    My question is: when I want to plot a dynamic XY graph, I should use an XY chart. Up to here everything is OK. But the "history size" option of the XY chart buffer forces me to fix how many points of history data will be displayed. What I want is not to fix it; I want to see all measurement points until I stop the measurement. That's why I connected the history size option to the loop iteration indicator, so that after each measurement I would be able to see all the previous data until I stop the loop. But it doesn't work! Is my approach wrong?

    Sima is right.
    You cannot have an unlimited history size on a computer with limited memory and resources (yes, even gigabytes are not infinite). Of course you can make your own history buffer by growing the x and y data in shift registers, but at some point during the run the system will bog down due to constant memory reallocation operations. Also, you cannot possibly display more data than fits on a typical screen.
    Typically there is some knowledge of an upper boundary, so you can preallocate suitable buffers. If the history is bigger than fits in memory, stream to disk and add some mechanisms to browse through the history data by reading selected portions back.
    LabVIEW Champion . Do more with less code and in less time .
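The "preallocate a suitable buffer" advice amounts to a fixed-capacity circular history: once the buffer is full, the oldest point is overwritten. A minimal illustration in Java of that data structure (all names are the example's own, not LabVIEW API):

```java
// Fixed-capacity circular history for XY points: memory is allocated once,
// and once full, each add() overwrites the oldest point.
public class XyHistory {
    private final double[] xs, ys;
    private int next = 0, count = 0;

    XyHistory(int capacity) {
        xs = new double[capacity];
        ys = new double[capacity];
    }

    void add(double x, double y) {
        xs[next] = x;
        ys[next] = y;
        next = (next + 1) % xs.length;   // wrap around at capacity
        if (count < xs.length) count++;
    }

    int size() { return count; }

    // Oldest-to-newest copy of the x values currently held.
    double[] xSnapshot() {
        double[] out = new double[count];
        int start = (count < xs.length) ? 0 : next; // oldest element's index
        for (int i = 0; i < count; i++)
            out[i] = xs[(start + i) % xs.length];
        return out;
    }
}
```

With capacity 3, adding x = 1..5 leaves the history holding 3, 4, 5 — constant memory, no reallocation during the run.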

  • How to ignore DAQmx buffer errors ?

    Hi,
    In my application, I have to start the data acquisition of analog inputs (PCI-6025E) so that they can be read whenever the user wishes, through a user-interface VI. In addition, I also have a background thread that reads two of these AI ports every 250 ms. However, by the time the buffer is read it has probably been overwritten at least once, and then an error window pops up saying the buffer was overwritten, etc. I think the buffer is a circular buffer, and if so, the buffer being overwritten doesn't affect my application. So, is there a way I can stop this error window from popping up?
    Thanks,
    Sharmila

    There might be a couple of ways out of this. If you create a functional global (also known as a LabVIEW 2 style global), you can have it written to by your DAQ portion and read from elsewhere. You can make this LV2 global act as a circular buffer, allowing you to read from it whenever you need to, while allowing the DAQ to write to it whenever it needs to. There has been a lot of discussion on the construction of LV2 globals, so you should be able to find the information.
    Additionally, in recent versions of LabVIEW there is an option (under the Tools/Options/Block Diagram menu) that enables automatic error handling. This causes an error dialog box to pop up in any VI that has error handling you haven't "handled" by wiring the error out to something else. Unchecking this may prevent the popup; I don't know whether the VI generating it will then just continue or whether it (or some earlier VI) will need to be reset. This is a useful feature, particularly in development and debugging, although I prefer to handle errors intentionally when designing my code, as it forces you to think about the various possible input cases that might fall outside of what you really wanted to happen.
    Putnam Monroe
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion

  • Does buffer cache size matters during imp process ?

    Hi,
    Sorry for a perhaps naive question, but I can't see why Oracle needs the buffer cache (larger = better?) during inserts only (an imp process with no index creation).
    As far as I know, the insert is done via the PGA area (direct insert).
    Please clarify for me.
    The DB is 10.2.0.3, if that matters :).
    Regards.
    Greg

    Surprising result: I tried closing the db handles with DB_NOSYNC and performance
    got worse. Using a 32 Meg cache, it took about twice as long to run my test:
    15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
    Here is some data from db_stat -m when using DB_NOSYNC:
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10811882)
    44864 Pages created in the cache
    10M Pages read into the cache (10798480)
    7380761 Pages written from the cache to the backing file
    3452500 Clean pages forced from the cache
    7380761 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    5001 Current clean page count
    5011 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47428268)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118169805)
    It looks like not flushing the cache regularly is forcing a lot more
    dirty pages (and fewer clean pages) from the cache. Forcing a
    dirty page out is slower than forcing a clean page out, of course.
    Is this result reasonable?
    I suppose I could try to sync less often than I have been, but more often
    than never to see if that makes any difference.
    When I close or sync one db handle, I assume it flushes only that portion
    of the dbenv's cache, not the entire cache, right? Is there an API I can
    call that would sync the entire dbenv cache (besides closing the dbenv)?
    Are there any other suggestions?
    Thanks,
    Eric

  • Help me to reduce buffer gets of following query

    Buffer gets with this query is 460K.
    We want to reduce it drastically.
    SELECT temp21.resource_code
    ,temp21.employee_type employee_type
         ,temp21.resource_name resource_name
         ,temp21.manager_id
         ,temp21.manager_name
         ,temp21.period_start
         ,temp21.period_finish
         ,temp21.avail_hours
         ,temp21.act_hours
         ,temp21.timesheet_status timesheet_status
         ,temp21.email,temp21.ouc
         ,temp21.work_country
         ,f.level1_unit_id level1_id
         ,f.level2_unit_id level2_id
         ,f.level3_unit_id level3_id
         ,f.level4_unit_id level4_id
         ,f.level5_unit_id level5_id
         ,f.level6_unit_id level6_id
         ,f.level7_unit_id level7_id
         ,f.level8_unit_id level8_id
         ,f.level9_unit_id level9_id
         ,f.level10_unit_id level10_id
         ,f.level1_name
         ,f.level2_name
         ,f.level3_name
         ,f.level4_name
         ,f.level5_name
         ,f.level6_name
         ,f.level7_name
         ,f.level8_name
         ,f.level9_name
         ,f.level10_name
    FROM (SELECT avail.res_id
         ,avail.resource_code
              ,avail.employee_type
                        ,avail.person_type
                        ,avail.resource_name
                        ,avail.manager_id
                        ,avail.manager_name
                        ,avail.period_start
                        ,avail.period_finish
                        ,avail.avail_hours
                        ,NVL(act.act_hours,0) act_hours
                        ,act.timesheet_status
                        ,avail.prid
                        ,avail.email
                        ,avail.ouc
                        ,avail.work_country
    FROM (SELECT /*+ALL_ROWS*/r.id res_id
         ,lk.name employee_type
              ,r.unique_name resource_code
                                            ,r.person_type
                                            ,r.first_name||' '||r.last_name resource_name
                                            ,r.manager_id manager_id
                                            ,r1.first_name||' '||r1.last_name manager_name
                                            ,TRUNC(tp.prstart) period_start
                                            ,TRUNC(tp.prfinish - 1) period_finish
                                            ,NVL(o.gs_hrs_avail_week,0) avail_hours
                                            ,r.email
                                            ,o.ouc
                                            ,o.work_country
                                            ,tp.prid
                   FROM niku.ODF_CA_RESOURCE o
                   ,niku.SRM_RESOURCES r
                        ,niku.SRM_RESOURCES r1
                        ,niku.PRTIMEPERIOD tp
                        ,niku.CMN_SEC_USER_GROUPS usr_grp
                        ,niku.CMN_SEC_GROUPS grp
                        ,(SELECT
                             NLS.NAME
                             ,LKP.ID
                             FROM CMN_CAPTIONS_NLS NLS,CMN_LOOKUPS LKP
                             WHERE NLS.PK_ID=LKP.ID
                             AND LKP.LOOKUP_TYPE='SRM_RESOURCE_TYPE'
                             AND NLS.TABLE_NAME='CMN_LOOKUPS'
                             AND NLS.LANGUAGE_CODE='en') lk
    WHERE r.id=o.id
    AND r1.user_id(+)=r.manager_id
    AND r.user_id=usr_grp.user_id
    AND usr_grp.GROUP_ID=grp.id
         AND r.person_type=lk.id
    AND r.is_active = 1
    AND r1.is_active = 1
    AND grp.GROUP_CODE='gs_tb'
    AND tp.prisopen = 1
    ) avail
                   ,(SELECT r.id
                   ,tp.prid
                                            ,SUM(NVL(practsum,0)/3600) act_hours
                                            ,ts.prstatus timesheet_status
    FROM niku.SRM_RESOURCES r
         ,niku.PRTIMESHEET ts
         ,(SELECT /*+ALL_ROWS*/ MAX(prid) prid
              ,prtimeperiodid
              ,prresourceid
          FROM niku.prtimesheet
          GROUP BY prtimeperiodid,prresourceid
         ) ts_new
                        ,niku.PRTIMEENTRY te
                        ,niku.PRTIMEPERIOD tp
                        ,niku.CMN_SEC_USER_GROUPS usr_grp
                        ,niku.CMN_SEC_GROUPS grp
    WHERE ts.prid=ts_new.prid
    AND ts.prtimeperiodid=ts_new.prtimeperiodid
    AND ts.prresourceid=ts_new.prresourceid
    AND r.id=ts.PRRESOURCEID
    AND ts.PRID=te.PRTIMESHEETID
    AND ts.PRTIMEPERIODID=tp.prid
    AND usr_grp.USER_ID=r.USER_ID
    AND grp.id=usr_grp.group_id
    AND r.is_active=1
    AND tp.PRISOPEN=1
    AND ts.prstatus not in(0,2,5)
    AND grp.group_code='gs_tb'
    AND TRUNC(tp.PRSTART) >= TRUNC(TO_DATE('7/24/2006','MM/DD/YYYY HH:MI:SS AM'))
    AND TRUNC(tp.PRFINISH-1) <= TRUNC(TO_DATE('9/25/2006','MM/DD/YYYY HH:MI:SS AM'))
              GROUP BY r.ID
                   ,tp.PRID
                             ,ts.prstatus) act
    WHERE act.id(+) = avail.res_id
    AND act.prid(+) = avail.prid
    AND (avail.avail_hours - NVL(act.act_hours,0) > 0))temp21
         ,prj_obs_associations o1
         ,nbi_dim_obs f
         WHERE 1=1
    AND temp21.prid in (SELECT prid
    FROM PRTIMEPERIOD
    WHERE TRUNC(PRSTART) >= TRUNC(TO_DATE('7/24/2006','MM/DD/YYYY HH:MI:SS AM'))
    AND TRUNC(PRFINISH-1) <= TRUNC(TO_DATE('9/25/2006','MM/DD/YYYY HH:MI:SS AM')))
    AND
    temp21.res_id = o1.record_id
    AND o1.unit_id = f.obs_unit_id
    AND o1.table_name = 'SRM_RESOURCES'
    AND f.obs_type_id = 5000009
    AND f.level5_unit_id = 5013334
                                       ORDER BY temp21.manager_name
                                       ,temp21.manager_id
                                       ,temp21.resource_name
                                       ,temp21.resource_code
                                       ,temp21.period_start
                                       ,temp21.period_finish
                                       ,temp21.timesheet_status

    ...Also
    AND TRUNC(tp.PRSTART) >= TRUNC(TO_DATE('7/24/2006','MM/DD/YYYY HH:MI:SS AM'))
    AND TRUNC(tp.PRFINISH-1) <= TRUNC(TO_DATE('9/25/2006','MM/DD/YYYY HH:MI:SS AM'))
    Although this won't fail, you have specified the time portion in the format mask without supplying any time data to be converted, which you don't need anyway: the time will automatically be set to midnight if you don't specify it.
    In addition, you are applying the TRUNC function to the columns you are comparing, which will prevent any indexes on them from being used. You should perform the arithmetic on the literals, not the columns, and set up the range correctly:
    AND tp.PRSTART >= TO_DATE('7/24/2006','MM/DD/YYYY')
    AND tp.PRFINISH < TO_DATE('9/25/2006','MM/DD/YYYY') + 2
    This should yield the same result and will allow any indexes to be used.
    Without a formatted execution plan and more info from you, this is just a wild guess as to what part of the problem might be.
    HTH
    David
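For what it's worth, the range rewrite David describes can be sanity-checked outside Oracle. The sketch below uses SQLite via Python's sqlite3 module with a made-up miniature of the PRTIMEPERIOD table (ISO date strings standing in for Oracle DATEs); it shows that the TRUNC-style predicate and the literal-arithmetic predicate select the same rows, while only the second compares the bare column, leaving any index on it usable:

```python
import sqlite3

# Hypothetical miniature of the PRTIMEPERIOD filter; names and data are
# invented for illustration, and SQLite stands in for Oracle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tp (prstart TEXT, prfinish TEXT)")
conn.executemany(
    "INSERT INTO tp VALUES (?, ?)",
    [("2006-07-24", "2006-07-31"),   # inside the range
     ("2006-09-18", "2006-09-25"),   # last period inside the range
     ("2006-10-02", "2006-10-09")],  # outside the range
)

# Original style: a function wrapped around the column on every row.
a = conn.execute(
    "SELECT * FROM tp "
    "WHERE date(prstart) >= date('2006-07-24') "
    "AND date(prfinish, '-1 day') <= date('2006-09-25')"
).fetchall()

# Rewritten style: bare columns compared against adjusted literals,
# i.e. PRFINISH - 1 <= :d  becomes  PRFINISH < :d + 2 days.
b = conn.execute(
    "SELECT * FROM tp "
    "WHERE prstart >= '2006-07-24' "
    "AND prfinish < date('2006-09-25', '+2 day')"
).fetchall()

assert a == b and len(a) == 2
```

The `+ 2` mirrors David's Oracle expression: if `PRFINISH - 1` must be on or before the end date, then `PRFINISH` itself must be strictly before the end date plus two days.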
    After a further look, the cartesian product appears to be in the bold section:
    SELECT
         temp21.resource_code
         ,temp21.employee_type employee_type
         ,temp21.resource_name resource_name
         ,temp21.manager_id
         ,temp21.manager_name
         ,temp21.period_start
         ,temp21.period_finish
         ,temp21.avail_hours
         ,temp21.act_hours
         ,temp21.timesheet_status timesheet_status
         ,temp21.email,temp21.ouc
         ,temp21.work_country
         ,f.level1_unit_id level1_id
         ,f.level2_unit_id level2_id
         ,f.level3_unit_id level3_id
         ,f.level4_unit_id level4_id
         ,f.level5_unit_id level5_id
         ,f.level6_unit_id level6_id
         ,f.level7_unit_id level7_id
         ,f.level8_unit_id level8_id
         ,f.level9_unit_id level9_id
         ,f.level10_unit_id level10_id
         ,f.level1_name
         ,f.level2_name
         ,f.level3_name
         ,f.level4_name
         ,f.level5_name
         ,f.level6_name
         ,f.level7_name
         ,f.level8_name
         ,f.level9_name
         ,f.level10_name
    FROM (     SELECT
                   avail.res_id
                   ,avail.resource_code
                   ,avail.employee_type
                   ,avail.person_type
                   ,avail.resource_name
                   ,avail.manager_id
                   ,avail.manager_name
                   ,avail.period_start
                   ,avail.period_finish
                   ,avail.avail_hours
                   ,NVL(act.act_hours,0) act_hours
                   ,act.timesheet_status
                   ,avail.prid
                   ,avail.email
                   ,avail.ouc
                   ,avail.work_country
              FROM (     SELECT /*+ALL_ROWS*/
                             r.id res_id
                             ,lk.name employee_type
                             ,r.unique_name resource_code
                             ,r.person_type
                             ,r.first_name||' '||r.last_name resource_name
                             ,r.manager_id manager_id
                             ,r1.first_name||' '||r1.last_name manager_name
                             ,TRUNC(tp.prstart) period_start
                             ,TRUNC(tp.prfinish - 1) period_finish
                             ,NVL(o.gs_hrs_avail_week,0) avail_hours
                             ,r.email
                             ,o.ouc
                             ,o.work_country
                             ,tp.prid
                        FROM
                             niku.ODF_CA_RESOURCE o
                             ,niku.SRM_RESOURCES r
                             ,niku.SRM_RESOURCES r1
                             ,niku.PRTIMEPERIOD tp
                             ,niku.CMN_SEC_USER_GROUPS usr_grp
                             ,niku.CMN_SEC_GROUPS grp
                             ,(     SELECT
                                       NLS.NAME
                                       ,LKP.ID
                                  FROM
                                       CMN_CAPTIONS_NLS NLS,
                                       CMN_LOOKUPS LKP
                                  WHERE
                                       NLS.PK_ID=LKP.ID
                                  AND LKP.LOOKUP_TYPE='SRM_RESOURCE_TYPE'
                                  AND NLS.TABLE_NAME='CMN_LOOKUPS'
                                  AND NLS.LANGUAGE_CODE='en'
                             ) lk
                        WHERE r.id=o.id
                        AND r1.user_id(+)=r.manager_id
                        AND r.user_id=usr_grp.user_id
                        AND usr_grp.GROUP_ID=grp.id
                        AND r.person_type=lk.id
                        AND r.is_active = 1
                        AND r1.is_active = 1
                        AND grp.GROUP_CODE='gs_tb'
                        AND tp.prisopen = 1
                   ) avail
                   ,(     SELECT
                             r.id
                             ,tp.prid
                             ,SUM(NVL(practsum,0)/3600) act_hours
                             ,ts.prstatus timesheet_status
                        FROM
                             niku.SRM_RESOURCES r
                             ,niku.PRTIMESHEET ts
                             ,(      SELECT /*+ALL_ROWS*/
                                       MAX(prid) prid
                                       ,prtimeperiodid
                                       ,prresourceid
                                  FROM
                                       niku.prtimesheet
                                  GROUP BY
                                       prtimeperiodid,
                                       prresourceid
                             ) ts_new
                             ,niku.PRTIMEENTRY te
                             ,niku.PRTIMEPERIOD tp
                             ,niku.CMN_SEC_USER_GROUPS usr_grp
                             ,niku.CMN_SEC_GROUPS grp
                        WHERE
                             ts.prid=ts_new.prid
                        AND ts.prtimeperiodid=ts_new.prtimeperiodid
                        AND ts.prresourceid=ts_new.prresourceid
                        AND r.id=ts.PRRESOURCEID
                        AND ts.PRID=te.PRTIMESHEETID
                        AND ts.PRTIMEPERIODID=tp.prid
                        AND usr_grp.USER_ID=r.USER_ID
                        AND grp.id=usr_grp.group_id
                        AND r.is_active=1
                        AND tp.PRISOPEN=1
                        AND ts.prstatus not in(0,2,5)
                        AND grp.group_code='gs_tb'
                        AND TRUNC(tp.PRSTART) >= TRUNC(TO_DATE('7/24/2006','MM/DD/YYYY HH:MI:SS AM'))
                        AND TRUNC(tp.PRFINISH-1) <= TRUNC(TO_DATE('9/25/2006','MM/DD/YYYY HH:MI:SS AM'))
                        GROUP BY r.ID
                        ,tp.PRID
                             ,ts.prstatus
                   ) act
              WHERE act.id(+) = avail.res_id
              AND act.prid(+) = avail.prid
              AND (avail.avail_hours - NVL(act.act_hours,0) > 0)
         )temp21
         ,prj_obs_associations o1
         ,nbi_dim_obs f
    WHERE 1=1
         AND temp21.prid in (     SELECT prid
                                       FROM PRTIMEPERIOD
                                       WHERE TRUNC(PRSTART) >= TRUNC(TO_DATE('7/24/2006','MM/DD/YYYY HH:MI:SS AM'))
                                        AND TRUNC(PRFINISH-1) <= TRUNC(TO_DATE('9/25/2006','MM/DD/YYYY HH:MI:SS AM')))
         AND temp21.res_id = o1.record_id
         AND o1.unit_id = f.obs_unit_id
         AND o1.table_name = 'SRM_RESOURCES'
         AND f.obs_type_id = 5000009
         AND f.level5_unit_id = 5013334
    ORDER BY temp21.manager_name
    ,temp21.manager_id
    ,temp21.resource_name
    ,temp21.resource_code
    ,temp21.period_start
    ,temp21.period_finish
     ,temp21.timesheet_status

  • Capture a portion of FP image

    Hi all,
    I'd like to know how to capture a portion of a front panel image. For instance, if I have 2 graphs on the front panel, I'd like to capture only one graph's image as a JPEG to disk.
    Thanks

    You didn't mention the LV version you are using.
    If you have LV 7.1, then you won't see the sub-menu for the invoke node. Rather, once you click on the invoke node, it will be placed on the block diagram and there you can select the 'Get Image' option.
    There is also another option for saving the data, as shown in the attached image 'Option 2'. If you use 'Copy Data', it copies all the figure data in the buffer (not only the data displayed) to the clipboard. You can then save it using any graphics program.
    Also, if you use the 'Export Simplified Image...' option, you can save a B&W image in either WMF or BMP format to the clipboard or a file.
    Hope this helps.
    Attachments:
    LV7.1.jpg ‏16 KB
    LV7.1-FP.jpg ‏51 KB
    LV7.1-FP-option2.jpg ‏67 KB

  • Linking axis to saved buffer in ROM after initialization with 7356

    Can you please help?
    I can't seem to re-connect an axis to a saved position buffer in ROM after initialization/reset of board.
    Details:
    When I make a position buffer and save it to ROM and then repeatedly run the motion command (after using Configure Buffer.flx with buffer size 0), the motion works just fine. If, however, after saving this buffer to ROM, I do some setting changes/initialization etc., then I can't seem to reconnect to this existing buffer in ROM (I checked, and it is there, via Measurement and Automation Explorer). So, how do I re-connect to this buffer?
    My system specs: Using windows XP sp2 with Labview 8, and the motion card is the PCI-7356.
    Thanks very much.
    Richard Cisek
    [email protected]
    Attachments:
    Richard's question.vi ‏37 KB

    Hi Richard,
    I have attached my working VI to this reply.  I tested it, and it
    works after a reset and reinitialization of the motion controller board.
    In the Connect to Buffer and Move portion of your VI, you must
    reconfigure the Operation Mode for Absolute Contouring.  After a
    reset, the motion controller board will have a default Operation Mode
    of Absolute Position, so you must change it back to Absolute
    Contouring.  You must also input the correct number of Total
    Points to the Configure Buffer function.  The correct number of
    Total Points is the size of the array in the selected buffer.
    Make sure that the Measurement & Automation Explorer is CLOSED when
    you run this program.  If MAX is open, its Object Memory Manager
    will be making calls to Check Buffer which will conflict with this VI
    and cause errors.
    I also added Motion Error Handler VIs, so you can monitor any motion errors that might occur.
    Sorry for the delay.  Hope this helps!
    Allen H.
    Attachments:
    SaveAndLoadBufferFromROM.vi ‏101 KB

  • Perl Buffer Issue on Sun Java Web Server 7_3

    I am currently migrating a website built using Perl on Apache. I was able to set up the Sun Java Web Server to run the site without any errors. During testing we noticed one small anomaly that I just can't figure out. Many of the Perl scripts run for 30+ seconds. The scripts render a portion of the response, HTML, that displays a status to the user. In Apache this displays fine, but in the Sun Java Web Server nothing is displayed until the script is complete. We have the "$| = 1;" command sprinkled throughout the scripts to properly flush the buffer, but that doesn't seem to work. Is there a config option that I'm missing that will help flush the buffer from CGI programs? Any help would be greatly appreciated.

    I made the change in my /webserver7/https-webserver.fmr.com/config/obj.conf file. Here is a complete copy of my obj.conf file. I've restarted the web server instance and still the buffer doesn't seem to flush anything to the client until the Perl script is done running.
    <Object name="default">
    AuthTrans fn="match-browser" browser="*MSIE*" ssl-unclean-shutdown="true"
    NameTrans fn="ntrans-j2ee" name="j2ee"
    NameTrans fn="pfx2dir" from="/mc-icons" dir="/apps/sun/webserver7/lib/icons" name="es-internal"
    PathCheck fn="uri-clean"
    PathCheck fn="check-acl" acl="default"
    PathCheck fn="find-pathinfo"
    PathCheck fn="find-index-j2ee"
    PathCheck fn="find-index" index-names="index.html,home.html,index.jsp"
    ObjectType fn="type-j2ee"
    ObjectType fn="type-by-extension"
    ObjectType fn="force-type" type="text/plain"
    Service method="(GET|HEAD)" type="magnus-internal/directory" fn="index-common"
    Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
    Service method="TRACE" fn="service-trace"
    Error fn="error-j2ee"
    AddLog fn="flex-log"
    </Object>
    <Object name="j2ee">
    Service fn="service-j2ee" method="*"
    </Object>
    <Object name="es-internal">
    PathCheck fn="check-acl" acl="es-internal"
    </Object>
    <Object name="cgi">
    ObjectType fn="force-type" type="magnus-internal/cgi"
    Service fn="send-cgi" bucket="cgi-bucket" UseOutputStreamSize=0
    </Object>
    <Object name="send-precompressed">
    PathCheck fn="find-compressed"
    </Object>
    <Object name="compress-on-demand">
    Output fn="insert-filter" filter="http-compression"
    </Object>
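The behavior being fought here (Perl's `$| = 1` autoflush versus buffering downstream) is ordinary stream buffering. A minimal Python sketch of the same effect, using an in-memory byte stream to stand in for the connection to the client:

```python
import io

# A buffered text stream over an in-memory byte stream, standing in
# for the CGI process's stdout on its way to the client.
raw = io.BytesIO()
out = io.TextIOWrapper(raw, encoding="utf-8")

out.write("Status: working...")   # goes into the buffer, not the "client"
assert raw.getvalue() == b""      # nothing has reached the other side yet

out.flush()                       # roughly what $| = 1 does after each write
assert raw.getvalue() == b"Status: working..."
```

Even with the script flushing on every write, the web server can still hold the response in its own output buffer, which is what `UseOutputStreamSize=0` on the `send-cgi` Service line is meant to disable.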
