Reading from ABAP memory, not available in call stack

Hi,
I need to read a table from ABAP memory. It is not available from the call stack, so I can't use the standard 'assign' approach. The internal table is listed under System areas -> Area ITABS-HEADS with the name \FUNCTION-POOL=MLSP\DATA=IY_ESLL[].
Is it even possible to read this table? Seems as though I have to access function-pool MLSP to find it.
Regards,
Damian

Hi,
The main program of this function pool is SAPLMLSP. If you can add a small form to any of its includes that returns the content of the internal table (IY_ESLL[]), that should solve your problem.
In the program that needs the data, write something like:
PERFORM Z_GET_MLSP_DATA(SAPLMLSP) USING GT_ESLL.
This form can be created in any sub-include of SAPLMLSP.
However, a quick look at SAPLMLSP does not reveal any user-modifiable includes, though I didn't check very carefully.
If you are on ECC 6.0, there are plenty of enhancement spots, which could be used for this purpose.
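A minimal sketch of this approach, assuming a modifiable include (or enhancement spot) can be found; the form name Z_GET_MLSP_DATA and the line type ESLL are illustrative:

```abap
* Form added to an include of function group MLSP (main program SAPLMLSP).
* The name and typing are illustrative; IY_ESLL[] is a global of the
* function group, so it is visible inside any form of SAPLMLSP.
FORM z_get_mlsp_data USING pt_esll TYPE STANDARD TABLE.
  pt_esll[] = iy_esll[].
ENDFORM.

* In the program that needs the data (same internal session):
DATA gt_esll TYPE STANDARD TABLE OF esll.  " assuming the line type matches

PERFORM z_get_mlsp_data(saplmlsp) USING gt_esll.
```

Note that the external PERFORM runs the form in the program context of SAPLMLSP; the table will only contain data if the function group has already been loaded and filled in the same internal session.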

Similar Messages

  • I haven't received any receipts for purchases since mid-January 2014. How can I contact iTunes and have them fix this? (I live in Thailand, so calling or finding local support is not available to me.)

    Have the ISP modems and routers been power-cycled and then tested again? 
    Possible ISP connection issues or a LinksysSmartWiFi.com service outage maybe...
    Are the FW versions on all routers up to date? 
    To safely update the FW, I recommend doing the following: Download the FW file from the support site first.
    Disable the auto-update feature on the router. (If this was on, possibly something was updated recently. I recommend leaving this OFF.)
    1. Save the router config to a file first using IE or FF with all security add-ons disabled.
    2. Factory reset the router with all other devices disconnected or turned OFF, except for 1 wired PC.
    3. Reload or update the FW using IE or FF. Just download the FW file to your local wired LAN PC.
    4. Factory reset the router, then set up from scratch first and test without loading the saved config from file. Check whether any problems are fixed before loading the saved config from file. Sometimes you need to set up from scratch without loading the saved config file. It's just safer that way.

  • Error in Inbound Queue : Internally required memory not available

    Hi,
         We are trying to send a 110 MB file from R/3 to a legacy system through XI. It is failing in the XI inbound queue with the error "Internally required memory not available". When I checked memory usage on the XI server, it is using the full 3 GB of extended memory and entering private mode. We don't have any problems with files up to 50 MB.
    Any inputs would be appreciated.
    Regards,
    Ranjeeth.

    Try to process the data by breaking the message into dependent segments.

  • SAPOsCol running but not working (shared memory not available)

    Dear Forum,
    We have just set new passwords for the SAP AD users (incl. SAPService<SID> and <SID>adm) and after this restarted the SAP instances and servers/clusters. Everything came up well, and all services, incl. SAPOsCol, started up automatically with the new passwords.
    Everything works fine - only this morning I wanted to have a look in ST06, and the program tells me "SAPOSCOL not running? (shared memory not available)".
    The service is running though. Would a restart of the service make any difference, and is there any possibility that a restart of the service on a live, running system would force a system restart? (I couldn't think of any, but I hate restarting services on a live production system.)
    Would it be an option to stop/start the collector from ST06?
    Thanks in advance,
    Kind regards,
    Soren

    Hello Kaushal,
    Thank you for your answer!
    I tried to stop/start the service from ST06, but it doesn't work. Here is the content of the log - any ideas how I could start the collector without rebooting the server?
    HWO description
          SAPOSCOL version  COLL 20.95 701 - 20.64 NT 07/10/17, 64 bit, multithreaded, Non-Unicode
          compiled at   Feb 24 2009
          systemid      562 (PC with Windows NT)
          relno         7010
          patch text    COLL 20.95 701 - 20.64 NT 07/10/17
          patchno       32
          intno         20020600
          running on    L5183N01 Windows NT 6.0 6002 Service Pack 2 16x AMD64 Level 6 (Mod 26 Step 5)
          PATCHES
          DATE     CHANGELIST           PLATFORM             PATCHTEXT
          20081211 1032251              ALL                  Option -w support. Removed SizeOfRecord warning rt 130.
          20090114 1036522              UNIX                 Log file permissions: 664.
          20090203 1041526              UNIX                 Add. single instance check.
          20090210 1042962              ALL                  Continue after EWA HW XML generation failure.
    09:46:39 12.04.2010   LOG: Profile          : no profile used
    09:46:39 12.04.2010   LOG: Saposcol Version  : [COLL 20.95 701 - 20.64 NT 07/10/17]
    09:46:39 12.04.2010   LOG: Working directory : C:\usr\sap\PRFCLOG
    09:46:39 12.04.2010   LOG: Allocate Counter Buffer [10000 Bytes]
    09:46:39 12.04.2010   LOG: Allocate Instance Buffer [10000 Bytes]
    09:46:40 12.04.2010   LOG: Shared Memory Size: 118220.
    09:46:40 12.04.2010   LOG: Connected to existing shared memory.
    09:46:40 12.04.2010   LOG: MaxRecords = 1037 <> RecordCnt + Dta_offset = 1051 + 61
    09:46:55 12.04.2010 WARNING: WaitFree: could not set new shared memory status after 15 sec
    09:46:55 12.04.2010 WARNING: Cannot create Shared Memory
    thanks!
    Soren
    Edited by: Soeren Friis Pedersen on Apr 12, 2010 9:49 AM

  • Shared Memory not available

    Hi,
    I'm using MaxDB Community Edition 7.6.06.10
    Now the second customer has the error
    -24700 Could not start DBM Server
    -24832 Shared memory not available
    -24744 Shared Memory from different platform Unknown (0x00) (expected WIN32, I386)
    Both installations worked fine over several weeks/months.
    What can I do?
    Thank you,
    Christoph Schwerdtner

    Hi
    I was given this fix by SAP Support.
    In the directory X:\sapdb\data\wrk you will find two files SID.dbm.shm and SID.dbm.shi.
    Rename these files to have a .old extension and restart your DBM session.  These files will be recreated automatically.
    This can be done while the database is online.
    Regards
    Doug

  • SAPOSCOL is not working(shared memory not available)

    Hi All,
    One of my production application server instances is not displaying the data in tcode OS06,
    even though the other application server instances are able to display it.
    Getting message: SAPOSCOL is not working (shared memory not available)
    Environment:
    Kernel 640 patch 196; 64-bit
    Collector Versions
      running                               COLL 20.94 640 - V3.73 64Bit
      dialog                                COLL 20.94 640 - V3.73 64Bit
    What could be the reason for it? Please suggest possible solutions.
    Thanks in Advance

    Hello Ramakrishna,
    I suggest you go to the OS level, restart the saposcol service, and check whether that works.
    No values and/or problems with shared memory:
    Check whether saposcol belongs to user 'root' and whether the authorizations are correct: -rwsr-x---
    Because the values of saposcol must be visible to all R/3 systems/instances on a host, saposcol writes data into the shared memory segment with key 1002. This shared memory segment must not be changed (for example, by setting profile parameter ipc/shm_psize_1002). Check in all profiles whether the parameter ipc/shm_psize_1002 is set. If it is, this parameter must be removed from the profiles ---> Note 37537.
    For further details please refer to SAP Notes 189072 and 726094.
    Regards,
    Prem

  • Fm to read from SAP memory

    Hi,
    I need an FM which will read the sub-equipments currently dismantled for a superior equipment.
    I am using tcode IE02 and enhancement IEQM0003.
    Please help.

    Hi
    You can use EXPORT/IMPORT with a memory ID (or SET/GET PARAMETER ID) to read from SAP memory.
    Regards
    Divya
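    As a hedged sketch of this suggestion: EXPORT/IMPORT with a memory ID passes data (including internal tables) through ABAP memory within one session, while SET/GET PARAMETER ID uses SAP memory for single values. The memory ID 'ZSUBEQ' and the use of table EQUI as the line type are illustrative assumptions:

    ```abap
    * In the code that has the data (e.g. inside enhancement IEQM0003):
    DATA lt_sub_equi TYPE STANDARD TABLE OF equi.  " illustrative line type
    EXPORT lt_sub_equi FROM lt_sub_equi TO MEMORY ID 'ZSUBEQ'.

    * In the program that needs the data (same internal session):
    DATA lt_result TYPE STANDARD TABLE OF equi.
    IMPORT lt_sub_equi TO lt_result FROM MEMORY ID 'ZSUBEQ'.
    IF sy-subrc <> 0.
      " nothing was exported under this memory ID
    ENDIF.

    * For a single value, SAP memory (SPA/GPA parameters) can be used instead;
    * 'EQN' is the standard parameter ID for the equipment number:
    DATA lv_equnr TYPE equnr.
    SET PARAMETER ID 'EQN' FIELD lv_equnr.
    GET PARAMETER ID 'EQN' FIELD lv_equnr.
    ```

    Note the difference in scope: ABAP memory (EXPORT/IMPORT) only works within one internal session's call sequence, while SAP memory (SET/GET PARAMETER) is shared across sessions of the same logon.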

  • SAPOSCOLL not running...shared memory not available...!!!!

    Hi All,
    saposcol is always getting terminated with the error message "shared memory not available". We have tried all possible remedies, but to no avail. There is no version mismatch, and there is also one saposcol service running on that host. The problem started after we upgraded our system.
    Please suggest some suitable notes or a solution, as it is a production server.
    Ashwini

    Hi,
    Please have a look at below link.
    SAPOSCOL not running ? (shared memory not available )
    Also Check SAP Note 548699 - FAQ: OS collector SAPOSCOL.
    Hope this helps.
    Thanks
    Sushil

  • SAPOSCOL not runnin? (Shared memory not available)

    Hi SAP Experts,
    OS:AIX, DB:DB2; R/3: 4.7 ;  SID:PW1;
    ST06 showing  SAPOSCOL is not running
    Status Bar shows:SAPOSCOL not runnin? (Shared memory not available)
    I checked below
    cpaiss51:pw1adm 6> id
    uid=15271(pw1adm) gid=506(sapsys) groups=720(sysadm)
    pw1adm 7> saposcol -s
    Shared Memory:           not attached
    no further information available ***
    file permission is ok
    -rwsr-x---   1 root     sapsys      1118225 Sep 14 2006  saposcol
    cpaiss51:pw1adm 10> file saposcol
    saposcol:       64-bit XCOFF executable or object module not stripped
    Thanks in advance for a quick reply.
    Thanks & Regards
    Kishore

    Hi Ruchit,
    The saposcol -s output is already pasted above (it shows the status of SAPOSCOL).
    DEV_COLL
    ========
    cpaiss51:pw1adm 61> more dev_coll
          SAPOSCOL version  COLL 20.89 640 - AIX v6.55 5L-64 bit 060323, 64 bit, single threaded, Non-Unicode
          compiled at   May  8 2006
          systemid      324 (IBM RS/6000 with AIX)
          relno         6400
          patch text    COLL 20.89 640 - AIX v6.55 5L-64 bit 060323
          patchno       126
          intno         20020600
          running on    cpaiss51 AIX 1 5 000F3B6F4C00
    15:30:06 26.10.2006   LOG: Profile          : no profile used
    15:30:06 26.10.2006   LOG: Saposcol Version  : [COLL 20.89 640 - AIX v6.55 5L-64 bit 060323]
    15:30:06 26.10.2006   LOG: Working directory : /usr/sap/tmp
    15:30:06 26.10.2006   LOG: Process Monitoring active.
    15:30:06 26.10.2006   LOG: searching for Process Monitoring Templates in /usr/sap/tmp/dev_proc
    15:30:06 26.10.2006   LOG: searching for Process Monitoring Templates in /usr/sap/tmp/procmon/
    15:30:06 26.10.2006   LOG: INFO: /usr/sap/tmp/procmon does not exist or cannot be opened.
    15:30:06 26.10.2006   LOG: INFO: Files for Process Monitoring in /usr/sap/tmp/procmon are ignored.
    15:30:06 26.10.2006   LOG: INIT    : Disk data collected for 5 disks (0 filtered)
    15:30:06 26.10.2006   LOG: INIT    : Network data collected for 3 interfaces
    15:30:06 26.10.2006   LOG: INIT    : Global CPU data collected.
    15:30:06 26.10.2006   LOG: INIT    : CPU data collected for 4 cpus
    Thanks & Regards,
    Kishore Reddy

  • SAPOSCOL not running:Shared Memory not available

    Dears,
    On my PI 7.1 server, when I go to ST06 I get the message:
    SAPOSCOL not running:Shared Memory not available
    Please suggest.
    Shivam

    A few suggestions:
    FAQs on saposcol - http://www.saptechies.com/os-collector-saposcol/
    SAP Note 165188 - saposcol not running on AIX-DB2
    SAP Note 548699 - FAQ: OS collector SAPOSCOL
    SAP Note 103135 - DB2-z/OS: Installing saposcol manually
    Regards
    Sekhar

  • ORA-08176: consistent read failure; rollback data not available

    Hi,
    We implemented UNDO management on our servers and started getting these errors for a few of our programs:
    ORA-08176: consistent read failure; rollback data not available
    These errors were not coming when we were using the old rollback segments and we have not changed any code on our server.
    1. What is possibly causing these errors?
    2. Why did they not surface with rollback segments but start appearing when we implemented AUM and a temporary TS (instead of a fixed TS used as temporary TS)?
    Our environment:
    RDBMS Version: 9.2.0.5
    Operating System and Version: Windows 2000 AS SP5
    Thanks
    Satish

    Not much in the alert.log. I looked at the trace file; it also does not have much information:
    ORA-12012: error on auto execute of job 7988306
    ORA-20006: ORA-20001: Following error occured in Lot <4407B450Z2 Operation 7131> Good Bad rollup.ORA-08176: consistent read failure; rollback data not available
    ORA-06512: at "ARIES.A_SP$WRAPPER_ROLLUPS", line 106
    ORA-06512: at line 1
    *** SESSION ID:(75.13148) 2004-11-23 09:16:14.281
    *** 2004-11-23 09:16:14.281
    ORA-12012: error on auto execute of job 7988556
    ORA-20006: ORA-20006: Following error occured in Lot <3351A497V1 Operation 7295> For No FL Rollup, Updating T_GOOD.ORA-08176: consistent read failure; rollback data not available
    ORA-06512: at "ARIES.A_SP$WRAPPER_ROLLUPS", line 106
    ORA-06512: at line 1
    *** SESSION ID:(75.16033) 2004-11-23 09:28:10.703
    *** 2004-11-23 09:28:10.703
    The version we have is :
    Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
    PL/SQL Release 9.2.0.5.0 - Production
    CORE 9.2.0.6.0 Production
    TNS for 32-bit Windows: Version 9.2.0.5.0 - Production
    NLSRTL Version 9.2.0.5.0 - Production
    Thanks
    Satish

  • Zfs list on solaris express 11 always reads from disk (metadata not cached)

    Hello All,
    I am migrating from OpenSolaris 2009.11 to SolarisExpress 11.
    I noticed that "zfs list" takes longer than usual, and is not instant. I then discovered via a combination of arcstat.pl and iostat -xnc 2 that every time a list command is issued, there are disk reads. This leads me to believe that some metadata is not getting cached.
    This is not the case in OpenSolaris where repeated "zfs list" do not cause disk reads.
    Has anyone observed this, and do you know of any solution?
    This is on an IDLE system with 48 GB of RAM - with plenty of free memory.

    Hi Steve,
    Great info again. I am still new to dtrace, particularly navigating probes etc. I've seen that navigation tree before.
    I would like to start by answering your questions:
    Q) Have you implemented any ARC tuning to limit the ARC?
    -> No out of the box config
    Q) Are you running short on memory? (the memstat above should tell you)
    -> Definitely not. I have 48 GB of RAM; ARC grows to about 38 GB and then stops growing. I can reproduce the problem at boot at will with only 8 GB used. Nothing is getting aged out of the ARC at that time; only those metadata reads never get stored.
    Q) Are any of your fileystems over 70% full?
    -> No. I am curious, what changes when this happens? Particularly in regards to ARC - perhaps another discussion, I don't want to distract this subject.
    Q) Have you altered what data is/is not cached? ($ zfs get primarycache)
    -> No - everything should be cached. I also recently added an l2cache (80 GB). The metadata is not cached there either.
    I am not yet familiar with dtrace's processing capabilities, so I had to parse the output via perl. Notice how each execution has the exact same number of misses. This is because these particular data blocks (metadata blocks) are not cached in the ARC at all:
    :~/dtrace# perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h' ^C
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11021 has exited
    $VAR1 = {
              ':arc-hit' => 2,
              ':arc-miss' => 192
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11026 has exited
    $VAR1 = {
              ':arc-hit' => 1,
              ':arc-miss' => 192
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11031 has exited
    $VAR1 = {
              ':arc-hit' => 12,
              ':arc-miss' => 192
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11036 has exited
    $VAR1 = {
              ':arc-hit' => 4,
              ':arc-miss' => 192
    :~/dtrace# ./zfs_list.d -c 'zfs list' |perl -MData::Dumper -e 'while (<>) {if (/.+(:arc-hit|:arc-miss).*/) { $h{$1}+=1}} print Dumper \%h'
    dtrace: script './zfs_list.d' matched 4828 probes
    dtrace: pid 11041 has exited
    $VAR1 = {
              ':arc-hit' => 27,
              ':arc-miss' => 192
    :~/dtrace#
    I presume the next step would be to perform stack analysis on which blocks are not being cached. I don't know how to do this ... I am guessing this is a mid-function probe? "| arc_read_nolock:arc-miss" - I don't know how to access its parameters.
    FYI, here's an example of a cache miss in my zfs list:
      0  -> arc_read                             
      0    -> arc_read_nolock                    
      0      -> spa_guid                         
      0      <- spa_guid                         
      0      -> buf_hash_find                    
      0        -> buf_hash                       
      0        <- buf_hash                       
      0      <- buf_hash_find                    
      0      -> add_reference                    
      0      <- add_reference                    
      0      -> buf_cons                         
      0        -> arc_space_consume              
      0        <- arc_space_consume              
      0      <- buf_cons                         
      0      -> arc_get_data_buf                 
      0        -> arc_adapt                      
      0          -> arc_reclaim_needed           
      0          <- arc_reclaim_needed           
      0        <- arc_adapt                      
      0        -> arc_evict_needed               
      0          -> arc_reclaim_needed           
      0          <- arc_reclaim_needed           
      0        <- arc_evict_needed               
      0        -> zio_buf_alloc                  
      0        <- zio_buf_alloc                  
      0        -> arc_space_consume              
      0        <- arc_space_consume              
      0      <- arc_get_data_buf                 
      0      -> arc_access                       
      0       | arc_access:new_state-mfu         
      0        -> arc_change_state               
      0        <- arc_change_state               
      0      <- arc_access                       
      0     | arc_read_nolock:arc-miss           
      0     | arc_read_nolock:l2arc-miss 
      0      -> zio_read                         
      0        -> zio_create                     
      0          -> zio_add_child                
      0          <- zio_add_child                
      0        <- zio_create                     
      0      <- zio_read                         
      0      -> zio_nowait                       
      0        -> zio_unique_parent              
      0          -> zio_walk_parents             
      0          <- zio_walk_parents             
      0          -> zio_walk_parents             
      0          <- zio_walk_parents             
      0        <- zio_unique_parent              
      0        -> zio_execute                    
      0          -> zio_read_bp_init             
      0            -> zio_buf_alloc              
      0            <- zio_buf_alloc              
      0            -> zio_push_transform         
      0            <- zio_push_transform         
      0          <- zio_read_bp_init             
      0          -> zio_ready                    
      0            -> zio_wait_for_children      
      0            <- zio_wait_for_children      
      0            -> zio_wait_for_children      
      0            <- zio_wait_for_children      
      0            -> zio_walk_parents           
      0            <- zio_walk_parents           
      0            -> zio_walk_parents           
      0            <- zio_walk_parents           
      0            -> zio_notify_parent          
      0            <- zio_notify_parent          
      0          <- zio_ready                    
      0          -> zio_taskq_member             
      0          <- zio_taskq_member             
      0          -> zio_vdev_io_start            
      0            -> spa_config_enter           
      0            <- spa_config_enter           
      0            -> vdev_mirror_io_start       
      0              -> vdev_mirror_map_alloc    
      0                -> spa_get_random         
      0                <- spa_get_random         
      0                -> vdev_lookup_top        
      0                <- vdev_lookup_top        
      0              <- vdev_mirror_map_alloc    
      0              -> vdev_mirror_child_select 
      0                -> vdev_readable          
      0                  -> vdev_is_dead         
      0                  <- vdev_is_dead         
      0                <- vdev_readable          
      0                -> vdev_dtl_contains      
      0                <- vdev_dtl_contains      
      0              <- vdev_mirror_child_select 
      0              -> zio_vdev_child_io        
      0                -> zio_create             
      0                  -> zio_add_child        
      0                  <- zio_add_child        
      0                <- zio_create             
      0              <- zio_vdev_child_io        
      0              -> zio_nowait               
      0                -> zio_execute            
      0                  -> zio_vdev_io_start    
      0                    -> spa_syncing_txg    
      0                    <- spa_syncing_txg    
      0                    -> zio_buf_alloc      
      0                    <- zio_buf_alloc      
      0                    -> zio_push_transform 
      0                    <- zio_push_transform 
      0                    -> vdev_mirror_io_start
      0                      -> vdev_mirror_map_alloc
      0                      <- vdev_mirror_map_alloc
      0                      -> vdev_mirror_child_select
      0                        -> vdev_readable  
      0                          -> vdev_is_dead 
      0                          <- vdev_is_dead 
      0                        <- vdev_readable  
      0                        -> vdev_dtl_contains
      0                        <- vdev_dtl_contains
      0                      <- vdev_mirror_child_select
      0                      -> zio_vdev_child_io
      0                        -> zio_create     
      0                          -> zio_add_child
      0                          <- zio_add_child
      0                        <- zio_create     
      0                      <- zio_vdev_child_io
      0                      -> zio_nowait       
      0                        -> zio_execute    
      0                          -> zio_vdev_io_start
      0                            -> vdev_cache_read
      0                              -> vdev_cache_allocate
      0                              <- vdev_cache_allocate
      0                            <- vdev_cache_read
      0                            -> vdev_queue_io
      0                              -> vdev_queue_io_add
      0                              <- vdev_queue_io_add
      0                              -> vdev_queue_io_to_issue
      0                                -> vdev_queue_io_remove
      0                                <- vdev_queue_io_remove
      0                              <- vdev_queue_io_to_issue
      0                            <- vdev_queue_io
      0                            -> vdev_accessible
      0                              -> vdev_is_dead
      0                              <- vdev_is_dead
      0                            <- vdev_accessible
      0                            -> vdev_disk_io_start
      0                            <- vdev_disk_io_start
      0                          <- zio_vdev_io_start
      0                        <- zio_execute    
      0                      <- zio_nowait       
      0                    <- vdev_mirror_io_start
      0                  <- zio_vdev_io_start    
      0                  -> zio_vdev_io_done     
      0                    -> zio_wait_for_children
      0                    <- zio_wait_for_children
      0                  <- zio_vdev_io_done     
      0                <- zio_execute            
      0              <- zio_nowait               
      0            <- vdev_mirror_io_start       
      0          <- zio_vdev_io_start            
      0          -> zio_vdev_io_done             
      0            -> zio_wait_for_children      
      0            <- zio_wait_for_children      
      0          <- zio_vdev_io_done             
      0        <- zio_execute                    
      0      <- zio_nowait                       
      0    <- arc_read_nolock                    
      0  <- arc_read
    I've compared the output of a single non-cached metadata read to a single read from the filesystem, by running dd and reading from a file that is not in the cache. The only difference in the stack is that the non-cached reads are missing:
      0                                -> vdev_queue_offset_compare
      0                                <- vdev_queue_offset_compare
    This is called in "-> vdev_queue_io_to_issue". But I don't think this is relevant; it is perhaps related to metadata vs. file data reads.
    What do you think should be next?

  • How to read From SAP Memory

    hello friends,
    Can you help me read the document number generated at runtime in a BDC from
    SAP memory?
    Thanks

    Hi,
    See both of the example programs below; they will get you what you need.
    report  ykrish_set_prg1.
    data : g_ebeln type ekko-ebeln.
    select-options : s_ebeln for g_ebeln obligatory.
    data: begin of it_ekko occurs 0,
            ebeln type ekko-ebeln,
            bukrs type ekko-bukrs,
            bstyp type ekko-bstyp,
            bsart type ekko-bsart,
          end of it_ekko.
    start-of-selection.
      select ebeln  bukrs
             bstyp  bsart
        into table it_ekko
        from ekko
        where ebeln in s_ebeln.
      if sy-subrc = 0.
        sort it_ekko by ebeln.
      endif.
    end-of-selection.
      if not it_ekko[] is initial.
        loop at it_ekko.
          write :/ it_ekko-ebeln hotspot on,
                   it_ekko-bukrs,
                   it_ekko-bstyp,
                   it_ekko-bsart.
          hide it_ekko-ebeln.
        endloop.
      endif.
    at line-selection.
      set parameter id 'BES' field it_ekko-ebeln.
      write :/ 'Parameter ID is set for Document Number :', it_ekko-ebeln.

    " second program: reads the document number back from SAP memory
    report  ykrish_get_prg1.
    data : g_ebeln type ekko-ebeln.
    get parameter id 'BES' field g_ebeln.
    call transaction 'ME23N'.
    Regards,
    Naresh

  • Error : Reading from Aggregation Level not permitted

    Hello Gurus,
          Could somebody please give some help or advice regarding this?
    I have a MultiProvider on a regular cube and an aggregation level. For some reason, the MultiProvider gives me the following error message when I try to display data using LISTCUBE:
    Reading from Aggregation Level is not permitted
    Message no. RSPLS801
    Also, the query on the MultiProvider does not display data for any of the KFs in the aggregation level, but when I create a query on the aggregation level itself it is fine.
    Any suggestions?
    Thanks.
    Swaroop.
    Edited by: Swaroop Chandra on Dec 10, 2009 7:29 PM

    Hi,
    Transaction LISTCUBE does not support all InfoProviders; e.g. aggregation levels are not supported. LISTCUBE is a 'low-level' tool to read data from the BW persistence layer, e.g. InfoCubes. Since aggregation levels always read transaction data via the so-called planning buffer, and the planning buffer technically is a special OLAP query, LISTCUBE does not support aggregation levels.
    Regards,
    Gregor

  • Results from another Query - not available

    HI,
    My environment is Business Objects XI 3.1 SP2 Edge series. I have the queries below regarding Web Intelligence reports:
    1. The functions/options Results from another Query (Any) and Results from another Query (All) are not available at query level.
    2. I am not getting the list of values for a prompt until I refresh the values for the prompt.
    Please suggest whether there are any fix packs available to enable that functionality.
    Best Regards,
    Reddeppa K

    Not getting the list of values for a prompt until you refresh the values for the prompt:
    There is an option called "Automatic refresh before use" in the object properties, available in the universe designer.
    Please check that box for the object you are using to populate the list of values, and export the universe.
    The functions/options Results from another Query (Any) or Results from another Query (All) are not available at query level:
    There is a limitation in the query-on-query functionality: both queries cannot be from an OLAP universe.
    I guess the query which needs to be filtered should be built on a universe from a relational database.
    Regards,
    Rohit
