Complex query cannot cope with shared memory

Hi All,
SELECT s.sessionid,
  s.requestid,
  s.locumid,
  s.sessiondate,
  s.sessionstart,
  s.sessionend,
  s.status,
  decode(type, '1', 'Surgeries', '2', 'Surgeries and Visits', '3', 'Surgeries and On Call', '4', 'Surgeries, On call and Visits', '5', 'On Call', '6', 'Visits', '0', 'Type not listed', type)
FROM sessions s,
  locumdetails l,
  locumrequest lr
WHERE l.locumid = s.locumid
AND lr.locumrequestid = s.requestid;
See the error below:
I am presently on 9i. Do I need to increase the shared memory size? If so, where do I do that, or what else could the problem be? Please help.
ERROR at line 1:
ORA-04031: unable to allocate 4096 bytes of shared memory ("large pool","unknown object","hash-join .
What do I do in such a situation, please?
cube60

Thanks Ranga
Re: strange startup problem
The database in question was configured as a shared server, not a dedicated server. When I tried to increase the shared pool size in OEM, it would not shut down because the session was connected through the shared server. So I had to edit the TNSnames.ora (and its .bak copy), changing that TNS entry to a dedicated server, and it then took the new memory increase quite alright.
I added (SERVER = dedicated) in the entry below:
EDIS =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = fulham)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = dedicated)
      (SERVICE_NAME = edis)
    )
  )
and it was fine.
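If you do need to grow the large pool itself rather than switch to a dedicated server, here is a minimal sketch (assuming an spfile; the 64M figure is only an illustration, and the safe route on 9i is SCOPE=SPFILE plus an instance restart):
SELECT name, value FROM v$parameter
WHERE  name IN ('large_pool_size', 'shared_pool_size');
ALTER SYSTEM SET large_pool_size = 64M SCOPE = SPFILE;
-- restart the instance for the new size to take effect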
Thanks for the reference.
cube60

Similar Messages

  • SAPOSCOL Problems with Shared Memory

    Has anyone had problems either starting saposcol or keeping it running? We have noticed multiple problems for several years now, and we are at the point where saposcol is not running at all on any of our servers, nor can we start it.
    We're seeing "SAPOSCOL not running ? (Shared memory not available)". I am working with SAP on two different customer messages, trying to determine why we cannot start saposcol.
    Does anyone have any ideas?
    Thanks,
    Traci Wakefield
    CV Industries

    I do have entries in the os-collector log:
          SAPOSCOL version  COLL 20.89 640 - AS/400 SAPOSCOL Version 18 Oct 2005, 64 bit, single threaded, Non-Unicode
          compiled at   Nov 26 2005
          systemid      324 (IBM iSeries with OS400)
          relno         6400
          patch text    COLL 20.89 640 - AS/400 SAPOSCOL Version 18 Oct 2005
          patchno       102
          intno         20020600
          running on    CENTDB1 OS400 3 5 0010000A1E1B
    13:25:06 28.02.2007   LOG: Profile          : no profile used
    13:25:06 28.02.2007   LOG: Saposcol Version  : [COLL 20.89 640 - AS/400 SAPOSCOL Version 18 Oct 2005]
    13:25:06 28.02.2007   LOG: Working directory : /usr/sap/tmp
    13:26:01 28.02.2007   LOG: Shared Memory Size: 339972.
    13:26:01 28.02.2007   LOG: INFO: size = (1 + 60 + 3143) * 106 + 348.
    13:26:01 28.02.2007   LOG: Connected to existing shared memory.
    13:26:01 28.02.2007   LOG: Reused shared memory. Clearing contents.
    13:26:04 28.02.2007   LOG: Collector daemon started
    13:26:04 28.02.2007   LOG: read coll.put Wed Feb 28 13:22:01 2007
    13:26:04 28.02.2007   LOG: Collector PID: 2469
    13:27:05 28.02.2007   LOG: Set validation records.
    14:00:32 28.02.2007 WARNING: Out of int limits in pfas41c1.c line 1528
    12:58:37 10.03.2007   LOG: Stop Signal received.
    12:58:38 10.03.2007   LOG: ==== Starting to deactivate collector ==========
    12:59:01 10.03.2007   LOG: ==== Collector deactivated  ================
    Also, I have tried saposcol -d (and the other commands below):
    saposcol -d
    kill
    clean
    leave
    quit
    and deleted the files coll.put and dev_coll.
    From my open SAP message:
    I have also done the following:
    1 Check the authorizations of SAPOSCOL as mentioned in SAP Notes:
    637174 SAPOSCOL cannot access Libraries of different SAP systems
    175852 AS/400: Authorization problems in SAPOSCOL
    2 Remove the shared memory (coll.put)
    (according to SAP Note: #189072). You could find 'coll.put' in path:
    '/usr/sap/tmp'.
    3 End the following jobs in QSYSWRK:
    QPMASERV, QPMACLCT, QYPSPFRCOL and CRTPFRDTA
    4 Delete the temporary user space:
    WRKOBJ OBJ(R3400/PERFMISC) OBJTYPE(USRSPC)
    Afterwards you could start SAPOSCOL on operating system level.
    Just logon to iSeries as <SID>OFR and run the following command:
    SBMJOB CMD(CALL PGM(<kernel_lib>/SAPOSCOL) PARM('-l'))
    JOB(SAPOSCOL) JOBQ(R3<SID>400/R3_<nn>)
    LOG(4 0 SECLVL) CPYENVVAR(YES)
    Thanks,
    Traci"

  • Acrobat/Reader cannot cope with internal Webserver connectivity problems, leading to comments being hidden/deleted. Any thoughts?

    Adobe Acrobat/Reader does not cope with interruptions in access to the internal WebServer hosting the PDF/XML review files.
    Steps to reproduce the issue:
    Connection is lost to the WebServer hosting the PDF review files.
    After connection to the WebServer is regained, access to the PDF review files is blocked by Adobe Acrobat/Reader.
    Sometimes this can be resolved by deleting the entire contents of the folder:
              C:\Users\<user-name>\AppData\LocalLow\Adobe\Acrobat\11.0\Synchronizer
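    As a sketch, the cleanup can be scripted from a Command Prompt (close Acrobat/Reader first; note this removes the folder itself, which Acrobat should simply recreate):
              rd /s /q "%USERPROFILE%\AppData\LocalLow\Adobe\Acrobat\11.0\Synchronizer"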
    NOTES:
    We have experienced this behaviour especially when connecting to the WebServer via VPN. I reckon the connectivity issues may initially be our problem, but they are issues that Acrobat/Reader simply does not handle.
    Clearing the contents of the Synchronizer folder does not always work.
    Team members have also seen situations where significant numbers of review comments or status values have not been displayed even though they exist in the XML review files. In some cases comments/status values have been automatically deleted without warning from the Adobe review XML files.
    PDF reviews are sent out via email as links to the PDF review file hosted on an internal WebServer.
    Writers and reviewers in the team have either:
    Acrobat XI Pro 11.0.08 with Reader 11.0.08
    Acrobat 9 Pro 9.5.5 with Reader 11.0.08
    I have personally experienced the above behaviour with Acrobat XI Pro 11.0.08 and Reader 11.0.08.
    Could this experience be connected to the Synchronizer (http://helpx.adobe.com/acrobat/kb/known-issues-acrobat-xi-reader.html )?
    For example, where a reviewer uses a different version of Acrobat/Reader?
    Could anyone please provide a list of compatible versions of Acrobat/Reader?
    Expected results:
    Adobe Acrobat/Reader should really handle connection issues with a warning and later check for recovered connections.
    However, what appears to be happening is that Acrobat/Reader writes some sort of blocking code to the Synchronizer folder that prevents future checks of the PDF review files on the review WebServer.
    As far as I understand, the connection issues are not caused by Adobe software, however the problems we are experiencing relate to how Adobe Acrobat/Reader handle this loss of connection.
    Plea for Help!
    I have checked and the experience of missing comments and persistent "connectivity issues" seems to be a reported but sadly outstanding issue...
    This has been an ongoing headache for some time, so solutions would be great, but any thoughts or suggestions are welcome...?
         For example, has anyone using SharePoint to host PDF reviews experienced anything similar?
    Many thanks!

    Adobe Acrobat 11.0.09 with Microsoft SharePoint Repository Trial Update
    We've completed week one of a three week trial (see above) that will go on until Friday 24th October.
    We currently have just four PDF files out for review that make use of the SharePoint repository.
    To-date:
    Each review file has had multiple concurrent reviewers posting comments over several days from both over the office LAN and also via VPN when working remotely.
    Dynamic stamps seem to be working as hoped and are all visible.
    We have not experienced any connection-type issues, and we have not had reason to clear out the Synchronizer folder as described above.
    Counter balance:
    I have experienced the situation with a remote VPN connection, where Adobe Acrobat could not connect to our internal WebServer repository. At the same time, I was able to connect to PDF review files hosted in SharePoint.
    Summary:
    The experience so far does suggest that the problems are caused by an inconsistent connection to the WebServer repository (especially over VPN) combined with Adobe Acrobat/Reader's inability to cope with the resulting situation.
    At the moment I must say that while it is early days, I am hopeful that the combination of SharePoint as the repository and the update to Adobe Acrobat/Reader 11.0.09 will continue to prove to be reliable.
    I'll provide an update to this post on Friday 24th October...
    Here's hoping!

  • Complex query - improve performance with nested arrays, bulk insert....?

    Hello, I have an extremely complicated query, that has a structure similar to:
    Overall Query
    ---SubQueryA
    -------SubQueryB
    ---SubQueryB
    ---SubQueryC
    -------SubQueryA
    The subqueries themselves are slow, and having to run them multiple times is much too slow! Ideally, I would run each subquery once and then reuse the results. I cannot use standard Oracle tables, and I would need to keep the results of the subqueries in memory.
    I was thinking I would write a PL/SQL script that runs the subqueries at the beginning and stores the results in memory. Then, in the overall query, I could loop through the results in memory and join the results of the various subqueries to one another.
    some questions:
    - What is the best data structure to use? I've been looking around, and there are nested arrays and the bulk insert functionality, but I'm not sure which is best to use.
    - The advantage of the method I'm suggesting is that I only have to run each subquery once. But when I start joining the results of the subqueries to one another, will I take a performance hit? Will Oracle be unable to optimize the joins?
    thanks in advance!
    Coop

    "I cannot use standard oracle tables": what does this mean? If you have subqueries, I assume you have tables to drive them? You're in an Oracle forum, so I assume the tables are Oracle tables.
    If so, you can look into the WITH clause: it can 'cache' the query results for you and reuse them multiple times, and it also helps make large queries with many subqueries more readable; see the sketch below.
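    A minimal sketch of the WITH approach (the table and column names are made up for illustration; the MATERIALIZE hint asks Oracle to evaluate a subquery once into a temporary result rather than re-running it):
    WITH subquery_a AS (
      SELECT /*+ MATERIALIZE */ dept_id, SUM(amount) AS total_amt
      FROM   some_slow_view
      GROUP  BY dept_id
    ),
    subquery_b AS (
      SELECT dept_id, COUNT(*) AS emp_cnt
      FROM   another_slow_view
      GROUP  BY dept_id
    )
    SELECT a.dept_id, a.total_amt, b.emp_cnt
    FROM   subquery_a a
    JOIN   subquery_b b ON b.dept_id = a.dept_id;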

  • Onboard video card with shared memory??

    Congratulations Apple on the new Intel-based Mac Mini! I was patiently waiting the arrival of a faster version of the Mini.
    However, I've noticed one horrible mistake. The Intel GMA950 graphics card is attached to the motherboard? What? Apple has decided to make their affordable systems even less gamer-friendly?
    So what you're saying is that if you want to play World of Warcraft or any of the few powerful games designed for the Mac, you must purchase a system that is at least $1,300?
    I just bought a Dell for $250 that runs World of Warcraft and other games rather smoothly. Hmm, let's see, $250 or $1,300? Which would you rather spend?
    Seriously though, Apple needs to sit down and rethink their strategy for the gaming market.
    How about offering a system that has at least a 64MB video card (at least meeting the MINIMUM requirements of the latest games) and is affordable (say, under $1,000 even)?
    The Apple Macintosh is indeed the Cadillac or Mercedes of personal computing when it comes to looks. But PCs are steadily becoming more attractive (especially when comparing performance ratings and prices).
    Thanks,
    James
    An avid Mac fan since age 12 (my first computer was a Performa 400)

    I've never heard of this before. Is it documented anyplace, or on any Mac web site?
    You can read about it in this Thinksecret article: http://www.thinksecret.com/news/0509macmini2.html . Also, if you look at the user comments on Amazon, you'll see a number of references by recipients of the silent upgrade machines.
    From my system profiler:
    Machine Name: Mac mini
    Machine Model: PowerMac10,2
    CPU Type: PowerPC G4 (1.5)
    Number Of CPUs: 1
    CPU Speed: 1.5 GHz
    L2 Cache (per CPU): 512 KB
    Memory: 1 GB
    Bus Speed: 167 MHz
    And video:
    ATI Radeon 9200:
    Chipset Model: ATY,RV280
    Type: Display
    Bus: AGP
    VRAM (Total): 64 MB
    Vendor: ATI (0x1002)
    Device ID: 0x5962
    Revision ID: 0x0001

  • Shared memory in /var/run instead of /tmp

    I use shared memory segments on a diskless node.
    These shared memory segments are stored in /tmp.
    Is there a way to put them in a "memory-based" directory such as /var/run (Solaris 8) instead of the disk-based directory?

    My problem was specifically to change the location of the shared memory on the file system. I succeeded with FIFOs but not with shared memory; I haven't found any solution.
    Concerning the definition of memory-based vs. file-based, I have no clue beyond the following extract from the docset:
    ------[Extract from solaris 8 documentation]-----
    The /var/run File System
    A new TMPFS-mounted file system, /var/run, is the repository for temporary system files that are not needed across system reboots in this Solaris release and future releases. The /tmp directory continues to be the repository for non-system temporary files.
    Because /var/run is mounted as a memory-based file system rather than a disk-based file system, updates to this directory do not cause unnecessary disk traffic that would interfere with systems running power management software.
    The /var/run directory requires no administration. You may notice that it is not unmounted with the umount -a or the umountall command.
    For security reasons, /var/run is owned by root.
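    Since /var/run is owned by root, one workaround (a sketch only, assuming you can run a step as root at boot; the directory and user names here are hypothetical) is to create an application-owned subdirectory there and point the shared-memory file path at it:
    mkdir /var/run/myapp
    chown appuser /var/run/myapp
    # the application then creates and maps its segment file as
    # /var/run/myapp/shmfile instead of /tmp/shmfile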

  • Shared Memory Short Dump: SHMM ab_ShmResetLocks - anyone?

    Hello :)
    I have a problem with a shared memory area (SHMM) that short dumps.
    I have a tool that uses a shared memory area instance to store data.
    When I run the tool and update the data stored in the instance, a short dump is thrown:
    SYSTEM_SHM_AREA_OBSOLETE with the explanation:
    An attempt was made to access a shared memory area that has already been
      released. Possible reasons for release:
    - explicit release from an ABAP program or from the shared objects
       management transaction
    - implicit release during resource shortages
    The first reason does not apply, as I DO NOT explicitly release the instance of my area.
    Moreover, ST22 shows me a different part of the code every time, so the cause is not my code, a specific part of the code, or the data that I shift.
    The second reason is probably the cause.
    But how can I avoid it?
    Moreover, in SM21 (System Log) there is a runtime error at the same time that just says
    ab_ShmResetLocks and nothing more :(
    Shared Memory size is sufficient.
    The shared memory area settings (SHMA) do not time out the read/write access (Lifetime: no entry; Automatic area structuring is set).
    Any help on why I get the short dump is more than welcome!
    best regards
    simon:)

    Please check the following SAP notes:
    [SAP Note 1105266 WDA: Runtime error SYSTEM_SHM_AREA_OBSOLETE|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bc_wd/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d31313035323636%7d]
    Also check the below notes for related details.
    Note 1322182 - Memory consumption of ABAP Shared Objects
    [SAP Note 764187 SYSTEM_SHM_AREA_DETACHED runtime error|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bc_aba/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d373634313837%7d]
    Please check the size of the shared memory object in the profile parameter abap/shared_objects_size_MB.
    Regards,
    Dipanjan

  • Shared memory used in Web Dynpro ABAP

    Hi Gurus,
    I am using shared memory objects in Web Dynpro ABAP. Everything was working fine until we went live in production. After some research I realized that users are not always able to reach the data in shared memory because, with more than one server, the web environment behaves differently from the classic GUI (each application server holds its own shared memory). The solution would be to go to the database instead of shared memory. However, I am still interested in whether there might be some other way to solve it. Any ideas?

    To my understanding, writing to the database is the safe option. There is no other way to solve your problem with shared memory.

  • SAPOSCOL is not working (shared memory not available)

    Hi All,
    One of my production application server instances is not displaying the data in transaction OS06,
    even though the other application server instances are able to display it.
    Getting the message: SAPOSCOL is not working (shared memory not available)
    Environment:
    Kernel 640, patch 196; 64-bit
    Collector Versions
      running                               COLL 20.94 640 - V3.73 64Bit
      dialog                                COLL 20.94 640 - V3.73 64Bit
    What could be the reason for it? Please suggest possible solutions.
    Thanks in Advance

    Hello Ramakrishna,
    I suggest you go to the OS level, restart the saposcol service, and check whether that works.
    No values and/or problems with shared memory
    Check whether saposcol belongs to user 'root' and whether the authorizations are correct: -rwsr-x---
    Because the values of saposcol must be visible for all R/3 systems/instances on a host, saposcol writes data in the shared memory segment with key 1002. This shared memory segment must not be changed (for example, by setting profile parameter ipc/shm_psize_1002). In all profiles, check whether the parameter ipc/shm_psize_1002 is set. If it is, this parameter must be removed from the profiles ---> Note 37537.
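    As a quick check (a sketch, assuming a Unix-like host; key 1002 shows up as 0x000003ea in the hex key column of ipcs output), you can look for the saposcol segment directly:
    ipcs -m | grep -i 3ea
    # if a stale segment is left behind after stopping saposcol, it can be
    # removed by its id: ipcrm -m <shmid>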
    For further details, please refer to SAP Notes 189072 and 726094.
    Regards,
    Prem

  • Cannot attach data store shared-memory segment using JDBC (TT0837) 11.2.1.5

    Hi,
    I found the thread Cannot attach data store shared-memory segment using JDBC (TT0837) but it can't help me out.
    I encounter this issue on Windows XP, and the application gets its connection from a JBoss data source.
    url=jdbc:timesten:direct:dsn=test;uid=test;pwd=test;OraclePWD=test
    username=test
    password=test
    Error information:
    java.sql.SQLException: [TimesTen][TimesTen 11.2.1.5.0 ODBC Driver][TimesTen]TT0837: Cannot attach data store
    shared-memory segment, error 8 -- file "db.c", lineno 9818, procedure "sbDbConnect"
    at com.timesten.jdbc.JdbcOdbc.createSQLException(JdbcOdbc.java:3295)
    at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3444)
    at com.timesten.jdbc.JdbcOdbc.standardError(JdbcOdbc.java:3409)
    at com.timesten.jdbc.JdbcOdbc.SQLDriverConnect(JdbcOdbc.java:813)
    at com.timesten.jdbc.JdbcOdbcConnection.connect(JdbcOdbcConnection.java:1807)
    at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:303)
    at com.timesten.jdbc.TimesTenDriver.connect(TimesTenDriver.java:159)
    I am confused because if I use plain JDBC directly, there is no such error:
    Connection conn = DriverManager.getConnection("url", "username", "password");
    Regards,
    Nesta

    I think error 8 is the OS error shown by net helpmsg 8:
    Not enough storage is available to process this command.
    If I'm wrong I'm happy to be corrected. If you reduce the PermSize and TempSize of the datastore (just as a test) does this allow JBOSS to load it?
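    A minimal sketch of that test, assuming the datastore sizes can be overridden as first-connection attributes in the JBoss connection URL (the values are illustrative, in MB):
    url=jdbc:timesten:direct:dsn=test;PermSize=256;TempSize=64;uid=test;pwd=test;OraclePWD=test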
    You don't say whether this is 32bit or 64bit Windows. If it's the former, the following information may be helpful.
    "Windows manages virtual memory differently than all other OSes. The way Windows sets up memory for DLLs guarantees that the virtual address space of each process is badly fragmented. Other OSes avoid this by densely packing shared libraries.
    A TimesTen database is represented as a single contiguous shared segment. So for an application to connect to a database of size n, there must be n bytes of unused contiguous virtual memory in the application's process. Because of the way Windows manages DLLs this is sometimes challenging. You can easily get into a situation where simple applications that use few DLLs (such as ttIsql) can access a database fine, but complicated apps that use many DLLs can not.
    As a practical matter this means that TimesTen direct-mode in Windows 32-bit is challenging to use for those with complex applications. For large C/C++ applications one can usually "rebase" DLLs to reduce fragmentation. But for Java based applications this is more challenging.
    You can use tools like the free "Process Explorer" to see the used address ranges in your process.
    Naturally, 64-bit Windows basically resolves these issues by providing a dramatically larger set of addresses."

  • 836: Cannot create data store shared-memory segment, error 22

    Hi,
    I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
    I am currently evaluating TimesTen for a global investment organisation. We currently have a large data warehouse, where we utilise summary views and query rewrite, but we have isolated some data that we would like to store in memory and then report on through a J2EE website.
    We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution, but we need to ensure that there are no limits on maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. That sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30 GB (the limit on our UAT environment is 32 GB). The ultimate goal is to see if we can store about 50-60 GB in memory.
    Is this correct? Or are there any caveats in relation to this?
    We have been able to get our data store to hold 8 GB of data, but we want to increase this. I am assuming that the following error message is due to us not changing /etc/system on the box:
         836: Cannot create data store shared-memory segment, error 22
         703: Subdaemon connect to data store failed with error TT836
    Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32 GB of memory and the 12 processors on the box?
    It's quite a big deal for us to bounce the UAT Unix box, so I want to be sure that I have factored in all the changes that would ensure the following:
    * Existing Oracle database instances are not adversely impacted
    * We are able to create a data store which can fully utilise the physical memory on the box
    * We don't need to change these settings for quite some time, and can still complete our evaluation
    We are currently in discussion with our in-house Oracle team, but we need to complete this process before contacting Oracle directly; help with the above request would speed this process up.
    The current /etc/system settings are below, and I have put the current machine's settings in as comments at the end of each line.
    Can you please provide the recommended settings to fully utilise the existing 32 GB on the box?
    Machine
    ## Minimum prerequisites for TimesTen, contrasted with the machine's current settings:
    SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
    FJSV,SPARC64-V
    System Configuration: Sun Microsystems sun4us
    Memory size: 32768 Megabytes
    12 processors
    /etc/system
    set rlim_fd_max = 1080                # Not set on the machine
    set rlim_fd_cur=4096               # Not set on the machine
    set rlim_fd_max=4096                # Not set on the machine
    set semsys:seminfo_semmni = 20           # machine has 0x42, Decimal = 66
    set semsys:seminfo_semmsl = 512      # machine has 0x81, Decimal = 129
    set semsys:seminfo_semmns = 10240      # machine has 0x2101, Decimal = 8449
    set semsys:seminfo_semmnu = 10240      # machine has 0x2101, Decimal = 8449
    set shmsys:shminfo_shmseg=12           # machine has 1024
    set shmsys:shminfo_shmmax = 0x20000000     # machine has 8,589,934,590. The hexadecimal translates into 536,870,912
    $ /usr/sbin/sysdef | grep -i sem
    sys/sparcv9/semsys
    sys/semsys
    * IPC Semaphores
    66 semaphore identifiers (SEMMNI)
    8449 semaphores in system (SEMMNS)
    8449 undo structures in system (SEMMNU)
    129 max semaphores per id (SEMMSL)
    100 max operations per semop call (SEMOPM)
    1024 max undo entries per process (SEMUME)
    32767 semaphore maximum value (SEMVMX)
    16384 adjust on exit max value (SEMAEM)

    Hi,
    I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
    Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 GB, thereby limiting the maximum size of a single shared memory segment (and hence TimesTen datastore) to 8 GB. You need to increase this to a suitable value (maybe 32 GB in your case). While you are doing that, it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
    You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
    TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
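    For illustration, the /etc/system change would look something like this (a sketch; 34359738368 bytes = 32 GB, so adjust to whatever fits your box, and remember Solaris needs a reboot for /etc/system changes to take effect):
    set shmsys:shminfo_shmmax = 34359738368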
    If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
    Regards, Chris

  • How to create an explain plan with row source statistics for a complex query that includes multiple table joins?

    1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins?
    When multiple tables are involved and the actual number of rows returned is more than what the explain plan estimates, how can I find out what change is needed in the statistics?
    2. Do row source statistics give some kind of understanding of extended stats?

    You can get row source statistics only *after* the SQL has been executed. An Explain Plan by itself cannot give you row source statistics.
    To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, OR use the hint "gather_plan_statistics" in the SQL being executed.
    Then use dbms_xplan.display_cursor.
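    For example (a sketch; emp and dept are just the usual demo stand-ins for your own tables):
    SELECT /*+ gather_plan_statistics */ e.ename, d.dname
    FROM   emp e JOIN dept d ON d.deptno = e.deptno;

    SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
    -- compare E-Rows (estimates) with A-Rows (actuals) in the output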
    Hemant K Chitale

  • Why can't I use PayPal with Family Sharing? Have a US Visa card but foreign address, and Apple does not seem to be able to cope with that (even though it is a multinational corporation). Could we lean on Apple to change this policy?

    Why can't I use PayPal with Family Sharing? I have a US Visa card but a foreign address, and Apple does not seem able to cope with that (even though it is a multinational corporation). So I set up payment with PayPal, and it worked fine. Now I want to take advantage of Family Sharing, but Apple won't allow it. Could we lean on Apple to change this policy?

    If you are not in the US then you cannot use the US store; the store's terms say that you have to be in a country to use its store. If you are using the US store then you risk having your account disabled.
    In terms of feedback for Apple : http://www.apple.com/feedback/

  • [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL

    Hi All,
    I am running an SSIS solution that runs 5 packages in sequence.  Only one package fails because of the error:
    [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available.  To resolve, run this package as an administrator, or on the system's console.
    I have added myself to the performance counters group. 
    I am running windows 7 with SSIS 2008.
    Any ideas would be appreciated.  I have read that some have disabled the warning, but I cannot figure out how to disable a warning. 
    Thanks.
    Ivan

    Hi Ivan,
    A package would not fail due to the warning itself; the warning means the account executing the package is not privileged to load the performance counters, and it can thus be safely ignored.
    To fix, visit: http://support.microsoft.com/kb/2496375/en-us
    So, either the package has a genuine error somewhere else, or it actually runs.
    Arthur My Blog

  • ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2

    [oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
    ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
    DWH12.__large_pool_size=16777216
    DWH11.__large_pool_size=16777216
    DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    DWH12.__pga_aggregate_target=2902458368
    DWH11.__pga_aggregate_target=2902458368
    DWH12.__sga_target=4328521728
    DWH11.__sga_target=4328521728
    DWH12.__shared_io_pool_size=0
    DWH11.__shared_io_pool_size=0
    DWH12.__shared_pool_size=956301312
    DWH11.__shared_pool_size=956301312
    DWH12.__streams_pool_size=0
    DWH11.__streams_pool_size=134217728
    #*._realfree_heap_pagesize_hint=262144
    #*._use_realfree_heap=TRUE
    *.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='DWH'
    *.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
    *.db_recovery_file_dest_size=7373586432
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
    DWH12.instance_number=2
    DWH11.instance_number=1
    DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
    DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
    *.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
    *.log_archive_format='DWH_%t_%s_%r.arc'
    #*.memory_max_target=7226785792
    *.memory_target=7226785792
    *.open_cursors=1000
    *.processes=500
    *.remote_listener='LISTENERS_SCAN'
    *.remote_login_passwordfile='exclusive'
    *.sessions=555
    DWH12.thread=2
    DWH11.thread=1
    DWH12.undo_tablespace='UNDOTBS2'
    DWH11.undo_tablespace='UNDOTBS1'
    SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
    [oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename
    # Useful for debugging multi-threaded applications
    kernel.core_uses_pid = 1
    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maxmimum size of a mesage queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    #kernel.shmall = 4294967296
    kernel.shmall = 8250344
    # Oracle kernel parameters
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    kernel.shmmax = 536870912
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048586
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304
    Please can someone tell me how to resolve this error?

    CAUSE: User specified one or more of { db_cache_size , db_recycle_cache_size, db_keep_cache_size, db_nk_cache_size (where n is one of 2,4,8,16,32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
    ACTION: Very Large Memory can only be enabled with the old (pre-Oracle_8.2) parameters
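    (For reference, the CAUSE/ACTION text above is what Oracle's oerr utility prints for this error; on the database host it can be reproduced with: oerr ora 00385.)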
