How to see blob as "normal data"

Hi,
I have a table with a column of BLOB type.
How can I see it as normal data, i.e. text or numeric, instead of binary?
Thanks & regards

And if the BLOB does not contain "normal" data, but pure binary data such as a PNG image, a Cisco router executable, or a Microsoft .xls file?
If you need to see "normal" data, then you should be asking yourself why that data is stored in a BLOB - and not as a varchar2, number, date or even CLOB data type.
The DBMS_LOB PL/SQL package provides an API for BLOBs and CLOBs. This can be used to read a chunk at a time and display this data in a custom format.
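For example, a minimal PL/SQL sketch along these lines reads the first chunk of a BLOB and prints it both as text and as hex (the table and column names docs, payload and id are placeholders, not from the original post):

DECLARE
  l_blob   BLOB;
  l_raw    RAW(2000);
  l_amount PLS_INTEGER := 2000;   -- bytes to read per chunk
BEGIN
  SELECT payload INTO l_blob FROM docs WHERE id = 1;
  -- read the first chunk, starting at offset 1
  l_raw := DBMS_LOB.SUBSTR(l_blob, l_amount, 1);
  -- as text: only meaningful if the bytes really are character data
  DBMS_OUTPUT.PUT_LINE(UTL_RAW.CAST_TO_VARCHAR2(l_raw));
  -- as a hex dump: always safe, regardless of content
  DBMS_OUTPUT.PUT_LINE(RAWTOHEX(l_raw));
END;
/

Larger LOBs can be walked in a loop by advancing the offset by l_amount on each pass.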

Similar Messages

  • In debugging, how can we view the normal data instead of hexadecimal data

    While debugging, when I check the fields of an internal table structure, hexadecimal data is coming up. How can we check the normal data?

    If you are using the Standard Debugger (old debugger), you will have an icon with + or - (search-button style) beside your fields. Click it. If it is in + mode you will see your actual data; if it is in - mode you will see hex data.
    Award points if helpful.
    Bhupal

  • HT4221 How to see photo info, like date taken?

    In the photo viewer, how can I see the photo's date taken?

    You can't in the Photos app itself, but there are other apps that you can use which show that (and other info) e.g. iPhoto, Photo Manager Pro

  • How to transfer BLOB type of data from SQL Server to Oracle

    Hi,
    Actually, I created a table with BLOB-type data in SQL Server. In fact, there is no exact BLOB type in SQL Server; it is split into the image and ntext types. But there is an exact BLOB type in Oracle.
    I don't know how to transfer this "BLOB" type into Oracle with DTS or any other method.
    Many Thanks for your any suggestions,
    Cathy

    JAVA_GREEN wrote:
    No, I haven't mixed it up. But the file from which I have to retrieve the data is in csv format. Even though I created another csv driver and tried, I could not find a solution to load/transfer a set of records from one file (in Excel/csv format) to another file (in mdb format). Please help me. Are there any other methods for this data transfer?
    A csv file is NOT an Excel file. The fact that Excel can import a csv file doesn't make it an Excel file.
    If you have a csv file then you must use a csv driver, or just use other code (not JDBC) to access it. There is, normally, an ODBC (nothing to do with Java) text driver that can do that.

  • How to see target groups in Data Target in the APD

    Hi everybody.
    I am trying to do a marketing segmentation of customers in the APD (BI) and I need to send the results to a target group in CRM. My problem is that I can't see target groups in the "Data Target CRM".
    I select the Logical System, and when I select the Data Target the system shows the CRM error: Data target TARGET_GROUP_FROM_B not known.
    I know that the communication is OK and the RFC is working, because I can see Marketing Attributes (I released the data target for replication and maintained attributes), but I don't know if I need to do something similar with Target Groups of CRM, or how.
    Does anyone have an idea how I can see target groups in the window of the Data Target?
    Thanks.

    Hello,
    I have seen this problem in other systems and it was caused by a GUI bug. Can you please check that you have the latest GUI patch installed for your GUI release?
    Best Regards,
    Des

  • How to see tablespace size in data dictionary

    How can I see the tablespace size, and the used tablespace size, in a data dictionary view?
    It is not in DBA_TABLESPACES or V$TABLESPACE.
    Thanks.

    I like this little piece of code of mine:
    SELECT TABLESPACE_NAME,
           RPAD(RPAD('|', 100 - PCT_FREE, 'X'), 100) || '|' USED
    FROM  (SELECT TABLESPACE_NAME,
                  100 - ROUND(100 - (SUM(BYTES)/1024/1024/1024) * 100 /
                              (SELECT SUM(BYTES)/1024/1024/1024
                               FROM DBA_DATA_FILES DF
                               WHERE DF.TABLESPACE_NAME = FS.TABLESPACE_NAME), 2
                       ) PCT_FREE,
                  ROUND(SUM(BYTES)/1024/1024/1024, 2) GIBFREE
           FROM DBA_FREE_SPACE FS
           GROUP BY TABLESPACE_NAME) TABLESPACE_SPACE
    ORDER BY TABLESPACE_NAME;
    TABLESPACE_NAME                USED
    ARCHDATA                       |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX      |
    DATBIGGX                       |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                  |
    DATGX                          |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                                                      |
    DATLOWGX                       |XXXXXXXXXXXXXXXXXXXXXX                                                                             |
    IDXBIGGX                       |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                 |
    IDXLGX                         |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                |
    IDXGX                          |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX          |
    LOGMNRTS                       |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                                           |
    SYSTEM                         |XXXXXXXXXXXXXXXX                                                                                   |
    TOOLS                          |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                                                   |
    UNDOGX                         |XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX                                         |
    Yoann.
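    If you just want the numbers rather than a bar, a query along these lines (a sketch; the column aliases are mine) sums DBA_DATA_FILES and DBA_FREE_SPACE per tablespace:
    SELECT df.tablespace_name,
           ROUND(df.total_mb, 2)                      AS size_mb,
           ROUND(df.total_mb - NVL(fs.free_mb, 0), 2) AS used_mb,
           ROUND(NVL(fs.free_mb, 0), 2)               AS free_mb
    FROM  (SELECT tablespace_name, SUM(bytes)/1024/1024 AS total_mb
           FROM dba_data_files
           GROUP BY tablespace_name) df
    LEFT JOIN
          (SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
           FROM dba_free_space
           GROUP BY tablespace_name) fs
      ON df.tablespace_name = fs.tablespace_name
    ORDER BY df.tablespace_name;
    Note that this covers permanent tablespaces only; temporary tablespaces are reported in DBA_TEMP_FILES.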

  • How to see data for particular date from a alert log file

    Hi Experts,
    I would like to know how I can see the data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
    Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing. But is there any easier way to see a particular date or time?
    Thanks
    Shaan

    Hi Jaffar,
    Here I have to pass the exact date and time. Is there any way to see records for, let's say, Nov 23 2007? Because when I used this
    tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
    it's not working. Here is the sample log file:
    Mon Nov 26 21:42:43 2007
    Thread 1 advanced to log sequence 138
    Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Mon Nov 26 21:42:43 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 137
    Mon Nov 26 21:42:43 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 137
    ARC1: Unable to archive log 1 thread 1 sequence 137
    Log actively being archived by another process
    Mon Nov 26 21:42:43 2007
    ARCH: Beginning to archive log 1 thread 1 sequence 137
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137
    .dbf'
    ARCH: Completed archiving log 1 thread 1 sequence 137
    Mon Nov 26 21:42:44 2007
    Thread 1 advanced to log sequence 139
    Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Mon Nov 26 21:42:44 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 138
    ARC0: Beginning to archive log 3 thread 1 sequence 138
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138
    .dbf'
    Mon Nov 26 21:42:44 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 138
    ARCH: Unable to archive log 3 thread 1 sequence 138
    Log actively being archived by another process
    Mon Nov 26 21:42:45 2007
    ARC0: Completed archiving log 3 thread 1 sequence 138
    Mon Nov 26 21:45:12 2007
    Starting control autobackup
    Mon Nov 26 21:45:56 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0033'
    handle 'c-2861328927-20071126-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Tue Nov 27 21:23:50 2007
    Starting control autobackup
    Tue Nov 27 21:30:49 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071127-00'
    Tue Nov 27 21:30:57 2007
    ARC1: Evaluating archive log 2 thread 1 sequence 139
    ARC1: Beginning to archive log 2 thread 1 sequence 139
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139
    .dbf'
    Tue Nov 27 21:30:57 2007
    Thread 1 advanced to log sequence 140
    Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Tue Nov 27 21:30:57 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 139
    ARCH: Unable to archive log 2 thread 1 sequence 139
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARC1: Completed archiving log 2 thread 1 sequence 139
    Tue Nov 27 21:30:58 2007
    Thread 1 advanced to log sequence 141
    Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Tue Nov 27 21:30:58 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 140
    ARCH: Beginning to archive log 1 thread 1 sequence 140
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140
    .dbf'
    Tue Nov 27 21:30:58 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 140
    ARC1: Unable to archive log 1 thread 1 sequence 140
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARCH: Completed archiving log 1 thread 1 sequence 140
    Tue Nov 27 21:33:16 2007
    Starting control autobackup
    Tue Nov 27 21:34:29 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0205'
    handle 'c-2861328927-20071127-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Wed Nov 28 21:43:31 2007
    Starting control autobackup
    Wed Nov 28 21:43:59 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-00'
    Wed Nov 28 21:44:08 2007
    Thread 1 advanced to log sequence 142
    Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Wed Nov 28 21:44:08 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 141
    ARCH: Beginning to archive log 3 thread 1 sequence 141
    Wed Nov 28 21:44:08 2007
    ARC1: Evaluating archive log 3 thread 1 sequence 141
    ARC1: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141
    .dbf'
    Wed Nov 28 21:44:08 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 141
    ARC0: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    ARCH: Completed archiving log 3 thread 1 sequence 141
    Wed Nov 28 21:44:09 2007
    Thread 1 advanced to log sequence 143
    Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Wed Nov 28 21:44:09 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 142
    ARCH: Beginning to archive log 2 thread 1 sequence 142
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142
    .dbf'
    Wed Nov 28 21:44:09 2007
    ARC0: Evaluating archive log 2 thread 1 sequence 142
    ARC0: Unable to archive log 2 thread 1 sequence 142
    Log actively being archived by another process
    Wed Nov 28 21:44:09 2007
    ARCH: Completed archiving log 2 thread 1 sequence 142
    Wed Nov 28 21:44:36 2007
    Starting control autobackup
    Wed Nov 28 21:45:00 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Thu Nov 29 21:36:44 2007
    Starting control autobackup
    Thu Nov 29 21:42:53 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0206'
    handle 'c-2861328927-20071129-00'
    Thu Nov 29 21:43:01 2007
    Thread 1 advanced to log sequence 144
    Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Thu Nov 29 21:43:01 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 143
    ARCH: Beginning to archive log 1 thread 1 sequence 143
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143
    .dbf'
    Thu Nov 29 21:43:01 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 143
    ARC1: Unable to archive log 1 thread 1 sequence 143
    Log actively being archived by another process
    Thu Nov 29 21:43:02 2007
    ARCH: Completed archiving log 1 thread 1 sequence 143
    Thu Nov 29 21:43:03 2007
    Thread 1 advanced to log sequence 145
    Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Thu Nov 29 21:43:03 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 144
    ARCH: Beginning to archive log 3 thread 1 sequence 144
    Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144
    .dbf'
    Thu Nov 29 21:43:03 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 144
    ARC0: Unable to archive log 3 thread 1 sequence 144
    Log actively being archived by another process
    Thu Nov 29 21:43:03 2007
    ARCH: Completed archiving log 3 thread 1 sequence 144
    Thu Nov 29 21:49:00 2007
    Starting control autobackup
    Thu Nov 29 21:50:14 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071129-01'
    Thanks
    Shaan

  • How can I convert the Firefox history timestamp in places.sqlite into a normal date format? The Firefox timestamp is not equivalent to the Unix timestamp, that's why I ask. I could not find a conversion function. Does anyone know something about this?

    When I opened places.sqlite with an sqlite editor I found out that Firefox saves last_visit_date as a timestamp that is 16 digits long. I realized that the first 10 digits are similar to the corresponding Unix timestamp but not equal. So, how can I convert the Firefox timestamp into a normal date? Or into the corresponding Unix timestamp?

    Write a bash script or C program to change the date format, and then use a SQL UPDATE to apply the converted value, e.g. WHERE date_field = 'embedded date value'.
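    For what it's worth, the 16-digit value is microseconds since the Unix epoch (which is why the first 10 digits look like a Unix timestamp), so dividing by 1,000,000 gives an ordinary Unix timestamp. In an sqlite shell you can convert it directly, for example (column names as in the moz_places table):
    SELECT url,
           datetime(last_visit_date / 1000000, 'unixepoch') AS last_visit
    FROM   moz_places
    ORDER  BY last_visit_date DESC
    LIMIT  20;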

  • How can I view my iPhone contents on a PC? How can I view my backup data in iTunes? How can I see my SMS and contacts on a PC?

    Hi,
    Can anybody help me out with the following questions:
    How can I view my iPhone contents on a PC? How can I view my backup data in iTunes? How can I see my SMS and contacts on a PC?

    The data isn't actually stored in iTunes.  iTunes is like a card catalog that allows you to access information from the various places that it's stored on the computer.  SMS and contact info will be viewable in whatever program you sync contact information.  The backup actually keeps the files, but doesn't make them viewable, I'm pretty sure.  You can only see the info you've actually synced.

  • How to see data in the multi-provider ?

    We have a multiprovider with the key figure total stock, which is defined as a non-cumulative key figure (it has a check mark next to it). It gets data from 4 InfoCubes and 1 ODS. If I right-click the InfoCube or the multiprovider and select the fields, I am unable to see the total stock key figure. But if I click the actual multiprovider and choose Display instead of Display Data, I can see the key figures folder; if I expand it I see a stock quantity folder and many other folders. If I expand the stock quantity folder, there are 3 key figures: ztotalstock, 0isstock and 0recstock. Of those, ztotalstock is flagged as non-cumulative. And I am unable to see the key figure value when I right-click on the multiprovider. So, any suggestion on how to display the data by selecting a posting date in the multiprovider? My guess is there may be an internal table/view to display the multiprovider data. Please let me know any suggestions.
    Thanks for your time,

    Prasanth, entering the multiprovider name in LISTCUBE is the same as right-clicking the multiprovider and choosing Display Data, where it gives the selection criteria for the fields. But both display the same fields, and that is not my case: I am unable to see one of the key figures that is inside the multiprovider. Have you opened any multiprovider? Please right-click on it, choose Display, and then go to the key figures folder; if you open that you have a lot of folders in it based on the category. In my case it is the stock key figure folder, and if you expand that there are three key figures. Of the three, one is ztotalstock, which is declared as a non-cumulative key figure (the option right next to it is checked). So I think that is the reason we are not able to see the ztotalstock key figure, whether in LISTCUBE or in the Display Data option on the multiprovider. Hope you got my situation. I know how to see the data in a multiprovider or InfoCube, but in this case it is not displaying the ztotalstock key figure, although in reporting we can see the data on the multiprovider. My question is why I am not able to see the selection for the ztotalstock key figure, either in LISTCUBE or in the right-click Display Data option. Did anybody face this situation?
    Thank You,

  • How to see data in a transaction ODS

    Hi all,
    I am populating data into my transactional ODS through the function module RSDRI_ODSO_INSERT_RFC, but I am not able to see the data in it, as there is no Manage option for the ODS, and by default an export DataSource is created for the ODS. Do I have to create one more standard ODS, use this export DataSource in update rules, and see the data there?
    Please let me know how to see the data, and also point me to any detailed document on transactional ODS.

    Hello Satish,
    Data loaded through a planning application, e.g. into a transactional ODS, will not have a Manage option or requests. You can view the data using the following methods.
    Since transactional ODS objects cannot be filled with BW data using staging (data is not supplied from the DataSources), they are not displayed in the Scheduler or in the Monitor. Transactional ODS objects can therefore not be updated in the same way as standard ODS objects.
    If you switch a standard ODS object that already has update rules available to a transactional one, the update rules are set as inactive and are no longer processable.
    As no change log is generated, no delta update of data stored at the end of the process is possible.
    You cannot set the indicator for BEx Reporting when creating a transactional ODS object.
    1.) In order to report this, you can create an InfoSet and then execute a BEx query for it.
    2) You can also download the data from your transactional ODS object using the download function. You can find this function in the administration of the ODS objects, Tab Page Contents -> Active Data.  Choose Execute to display the data. Using the main menu Edit -> Download, you can download the data in different formats.
    Hope it helps.
    San.

  • How to see data in hierarchy

    Experts,
    I was searching on the forums, but could not find an appropriate thread on this. I want to see data
    1. In the PSA tables or equivalent for the hierarchy data after a successful InfoPackage load. I can see the number of records in the monitor for the InfoPackage, but how do I see the actual data for that run, somewhere like the 'ALE Inbox'?
    2. When I click on 'Display' for the hierarchy, I can see the InfoObjects under the hierarchy, but how do I see the actual data inside those InfoObjects?
    Please list the tcodes if possible.

    Hi Latha,
    Whenever you create a hierarchy and activate it, an internal ID called the Hierarchy ID is generated, and this information is stored in the table RSHIEDIR. In SE12 go to this table and, for HIENM, give the name of your hierarchy and, for IOBJNM, the name of the InfoObject for which you created this hierarchy (0COSTELMNT in this case). With this you will get a HIEID.
    Pass this HIEID to the hierarchy (H) table of the cost element InfoObject (/BI0/HCOSTELMNT) and there you will find all the 9 records that you loaded for your hierarchy. Hope this helps solve your issue.
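    In plain SQL terms (a sketch only; normally you would browse these tables with SE12/SE16, and the hierarchy name below is just an example), the lookup described above is roughly:
    -- 1. Find the internal hierarchy ID
    SELECT hieid
    FROM   rshiedir
    WHERE  hienm  = 'ZCOSTELMNT_HIER'   -- your hierarchy name
    AND    iobjnm = '0COSTELMNT';
    -- 2. List the hierarchy nodes loaded under that ID
    SELECT *
    FROM   "/BI0/HCOSTELMNT"
    WHERE  hieid = :hieid;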

  • How to convert BLOB data into string format.

    Hi,
    I have a problem while converting BLOB data into string format.
    For example,
    Select dbms_lob.substr(c.shape.Get_wkb(),4000,1) from geotable c
    will get me the first 4000 bytes of the BLOB.
    When using SQL as I did above, the max length is 4000, but I can get 32K using PL/SQL as below:
    declare
      my_var CLOB;
    begin
      for x in (select X from T) loop
        my_var := dbms_lob.substr(x.X, 32767, 1);
      end loop;
      -- return my_var;  (a RETURN like this is only valid inside a function)
    end;
    With this I can comfortably convert a 32K BLOB field to a string.
    My problem is how to convert a BLOB to a string when it is larger than 32K.
    Please help me to resolve this,
    Thanx in advance for the support,
    Nilesh

    Nilesh,
    The result of get_wkb() will not be human readable (all values are encoded into some binary format):
    SELECT utl_raw.cast_to_varchar2(tbl.geometry.get_wkt()) from FeatureTable tbl;
    -- resulting string:
    ☺AW(⌂özßHAAÅ\(÷...
    You may also want to have a look at DBMS_LOB (http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#i1015792): "The DBMS_LOB package provides subprograms to operate on BLOBs, CLOBs, NCLOBs, BFILEs, and temporary LOBs."
    Regards,
    Noel
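    If the BLOB really does hold character data and you need more than 32K of it, the usual route is to convert it to a CLOB rather than a VARCHAR2, for instance with DBMS_LOB.CONVERTTOCLOB. A rough sketch (the geotable/shape names follow the original post; adapt the SELECT to your own table):
    DECLARE
      l_blob     BLOB;
      l_clob     CLOB;
      l_dest_off INTEGER := 1;
      l_src_off  INTEGER := 1;
      l_lang_ctx INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      l_warning  INTEGER;
    BEGIN
      SELECT g.shape.get_wkb() INTO l_blob
      FROM   geotable g
      WHERE  ROWNUM = 1;
      DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
      DBMS_LOB.CONVERTTOCLOB(l_clob, l_blob,
                             DBMS_LOB.LOBMAXSIZE,
                             l_dest_off, l_src_off,
                             DBMS_LOB.DEFAULT_CSID,
                             l_lang_ctx, l_warning);
      DBMS_OUTPUT.PUT_LINE('CLOB length: ' || DBMS_LOB.GETLENGTH(l_clob));
      DBMS_LOB.FREETEMPORARY(l_clob);
    END;
    /
    A PL/SQL VARCHAR2 is still capped at 32,767 bytes, so anything larger has to stay in a CLOB (or be processed piecewise in a loop of DBMS_LOB.SUBSTR calls).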

  • How we can see the abap memory data

    How can we see the ABAP memory data?
    Find the code below:
    import lsind
             report_title
             table_name
             report_field
             change_display
             show_hide
             conversion_exits
             table_description
             form_program
             select_form
             update_form
             line_size
             line_count
             records[]
             fields[]
             header_fields[]
             select_fields[]
             xrep[]
             from memory id 'LZUT5U11'.
    Regards
    santhosh

    Dear Santosh,
    ABAP MEMORY:
    A logical memory model illustrates how the main memory is distributed from the view of executable programs. A distinction is made here between external sessions and internal sessions .
    An external session is usually linked to an R/3 window. You can create an external session by choosing System/Create session, or by entering /o in the command field. An external session is broken down further into internal sessions. Program data is only visible within an internal session. Each external session can include up to 20 internal sessions (stacks).
    Every program you start runs in an internal session.
    All "squares" with rounded "corners" displayed in the status diagram represent a set of data objects in the main memory.
    The data in the main memory is only visible to the program concerned.
    CALL TRANSACTION and SUBMIT AND RETURN open a new internal session that forms a new program context. The internal sessions in an external session form a memory stack. The new session is added to the top of the stack.
    When a program has finished running, the top internal session in the stack is removed, and the calling program resumes processing.
    The same occurs when the system processes a LEAVE PROGRAM statement.
    LEAVE TO TRANSACTION removes all internal sessions from the stack and opens a new one containing the program context of the calling program.
    The ABAP memory is initialized after the program is called. In other words, you cannot transfer any data to a program called with LEAVE TO TRANSACTION via the ABAP memory.
    SUBMIT replaces the internal session of the program performing the call with the internal session of the program that has been called. The new internal session contains the program context of the called program with which it is performed.
    When a function module is called, the following steps are executed:
    A check is made to establish whether your program has called a function module of the same function group previously.
    If this is not the case, the system loads the associated function group to the internal session of the calling program as an additional program group. This initializes its global data.
    If your program used a function module of the same function group before the current call, the function module that you have called up at present can access the global data of the function group. The function group is not reloaded.
    Within the internal session, all of the function modules that you call from the same group access the global data of that group.
    If, in a new internal session, you call a function module from the same function group as in internal session 1, a new set of global data is initialized for the second internal session. This means that the data accessed by function modules called in session 2 may be different from that accessed by the function modules in session 1.
    You can call function modules asynchronously as well as synchronously. To do so, you must extend the function module call with the addition STARTING NEW TASK 'name'. Here, 'name' is a symbolic name in the calling program that identifies the external session in which the called program is executed.
    Function modules that you call using the addition STARTING NEW TASK 'name' are executed independently of the calling program. The calling program is not interrupted.
    To make function modules available for local asynchronous calls, you must identify them as remote-enabled (processing type: Remote-enabled module).
    There are various ways of transferring data between programs that are running in different program contexts (internal sessions). You can use:
    (1) The interface of the called program (standard selection screen, or interface of a
    subroutine, function module, or dialog module)
    (2) ABAP memory
    (3) SAP memory
    (4) Database tables
    (5) Local files on your presentation server.
    For further information about transferring data between an ABAP program and your presentation server, refer to the documentation for the function modules WS_UPLOAD and WS_DOWNLOAD.
    Function modules have an interface, which you can use to pass data between the calling program and the function module itself (there is also a comparable mechanism for ABAP subroutines). If a function module supports RFC, certain restrictions apply to its interface.
    If you are calling an ABAP program that has a standard selection screen, you can pass values to the input fields. There are two options here:
    By using a variant of the standard selection screen in the program call
    By passing actual values for the input fields in the program call
    If you want to call a report program without displaying its selection screen (default setting), but still want to pass values to its input fields, there is a variety of techniques that you can use.
    The WITH addition allows you to assign values to the parameters and select-options fields on the standard selection screen.
    If the selection screen is to be displayed when the program is called, use the addition: VIA SELECTION-SCREEN.
    Use the pattern button in the ABAP Editor to insert a program call via SUBMIT. The structure shows you the names of data objects that you can complete with the standard selection screen.
    For further information on working with variants and further syntax variants for the WITH addition, see the key word documentation in the ABAP Editor for SUBMIT.
    You can use SAP memory and ABAP memory to pass data between different programs.
    The SAP memory is a user-specific memory area for storing field values. It is available in all of the open sessions in a user's terminal session, and is reset when the terminal session ends. You can use its contents as default values for screen fields. All external sessions can access SAP memory. This means that it is only of limited use for passing data between internal sessions.
    The ABAP memory is also user-specific, and is local to each external session. You can use it to pass any ABAP variables (fields, structures, internal tables, complex objects) between the internal sessions of a single external session.
    Each external session has its own ABAP memory. When you end an external session (/i in the command field), the corresponding ABAP memory is released automatically.
    To copy a set of ABAP variables and their current values (a data cluster) to the ABAP memory, use the EXPORT ... TO MEMORY ID id statement. The id (up to 32 characters) is used to identify the different data clusters.
    If you repeat an EXPORT TO MEMORY ID statement for an existing data cluster, the new data overwrites the old.
    To copy data from ABAP memory to the corresponding fields of an ABAP program, use the IMPORT ... FROM MEMORY ID id statement.
    The fields, structures, internal tables, and complex objects in a data cluster in ABAP memory must be declared identically in both the program from which you exported the data and the program into which you import it.
    To release a data cluster, use the FREE MEMORY ID statement.
    You can import just parts of a data cluster with IMPORT, since the objects are named in the cluster.
    In the SAP memory, you can define memory areas (SET/GET parameters, or parameter IDs), which you can then address by a name of up to 20 characters.
    You can fill these memory areas either using the contents of input/output fields on screens, or using the ABAP statement:
    SET PARAMETER ID 'XYZ' FIELD field.
    The memory area with the name XYZ now has the value of field.
    You can use the contents of a memory area to display a default value in an input field on a screen.
    You can also read the memory areas from the SAP memory using the ABAP statement GET PARAMETER ID 'XYZ' FIELD field. The field then contains the value from parameter XYZ.
    The link between an input/output field and a memory area in SAP memory is inherited from the data element on which the field is based. You can enable the set parameter or get parameter attributes in the input/output field attributes.
    Once you have set the Set parameter attribute for an input/output field, you can fill it with default values from SAP memory. This is particularly useful for transactions that you call from another program without displaying the initial screen. For this purpose, you must activate the Set parameter functionality for the input fields of the first screen of the transaction.
    You can:
    (1) Copy the data that is to be used for the first screen of the transaction to be called to the parameter ID in the SAP memory. To do so, use the statement SET PARAMETER immediately before calling the transaction.
    (2) Start the transaction using CALL TRANSACTION 'tcode' or LEAVE TO TRANSACTION 'tcode'. If you do not want to display the initial screen, use the AND SKIP FIRST SCREEN addition.
    (3) The system program that starts the transaction fills the input fields that do not already have default values and for which the Get parameter attribute has been set with values from SAP memory.
    The Technical information for the input fields in the transaction you want to call contains the names of the parameter IDs that you need to use.
    Parameter IDs should be entered in table TPARA. This happens automatically if you create them via the Object navigator.
    Programs that you call using the statements SUBMIT, LEAVE TO TRANSACTION, SUBMIT AND RETURN, or CALL TRANSACTION run in their own SAP LUW, and update requests receive their own update key.
    When you use SUBMIT and LEAVE TO TRANSACTION, the SAP LUW of the calling program ends. If no COMMIT WORK statement occurred before the program call, the update requests in the log table remain incomplete and cannot be processed. They can no longer be executed. The same applies to inline changes that you make using PERFORM ... ON COMMIT.
    Data that you have written to the database using inline changes is committed the next time a new screen is displayed.
    If you use SUBMIT AND RETURN or CALL TRANSACTION to insert a program and then return to the calling program, the SAP LUW of the calling program is resumed when the called program ends. The LUW processing of calling and called programs is independent.
    In other words, inline changes are committed the next time a new screen is displayed. Update requests and calls using PERFORM ... ON COMMIT require an independent COMMIT WORK statement in the SAP LUW in which they are running.
    Function modules run in the same SAP LUW as the program that calls them.
    If you call transactions with nested calls, each transaction needs its own COMMIT WORK, since each transaction maps its own SAP LUW.
    The same applies to calling executable programs, which are called using SUBMIT AND RETURN.
    The statement CALL TRANSACTION allows you to:
    Shorten the user dialog when calling with CALL TRANSACTION ... USING bdc_tab.
    Determine the type of update (asynchronous, local, or synchronous) for the transaction called. For this purpose, use the addition CALL TRANSACTION ... USING ... UPDATE 'update_mode', where update_mode can have the values A (asynchronous), L (local), or S (synchronous).
    Combining the two options enables you to call several transactions in sequence (a logical chain), to reduce their screen sequence, and to postpone processing of SAP LUW 2 until processing of SAP LUW 1 has been completed.
    When you call a function module asynchronously using the CALL FUNCTION ... STARTING NEW TASK 'name' statement, it runs in its own SAP LUW.
    Programs that are executed with a SUBMIT AND RETURN or CALL TRANSACTION statement start their own LUW processing. You can use these to perform nested (complex) LUW processing.
    You can use function modules as modularization units within an SAP LUW.
    Function modules that are called asynchronously are suitable for programs that allow parallel processing of some of their components.
    All techniques are suitable for including programs with purely display functions.
    Note that a function module called with CALL FUNCTION STARTING NEW TASK is executed as a new logon. It, therefore, sees a separate SAP memory area. You can use the interface of the function module for data transfers.
    Example: In your program, you want to call a display transaction that is displayed in a separate window (amodal). To do so, you encapsulate the transaction call in a function module, which you set as to Remote-enabled module. You use the function module interface to accept values that you write to the SAP memory. You then call up the transaction in the function module using CALL TRANSACTION AND SKIP FIRST SCREEN. You call the function module itself asynchronously.
    Type 'E' locks for nested program calls may be requested more than once for the same object. This behavior can be described as follows:
    Lock entries from function modules called synchronously increment the cumulative counter, and are therefore successful.
    Lock entries from programs called with CALL TRANSACTION or SUBMIT AND RETURN are refused. The object to be locked by the called program is displayed as already locked by another user.
    Programs that you call using SUBMIT or LEAVE TO TRANSACTION cannot come into conflict with lock entries from the calling program, since the old program ends when the call is made. When a program ends, the system deletes all of the lock entries that it had set.
    Lock requests belonging to the same user from different R/3 windows or logons are treated as lock requests from other users.
    Regards,
    Rajesh.
    Please reward points if found helpful.

  • How to see the data stored in a DBMS_SQL.VARCHAR2S variable?

    How do I see the data stored in a DBMS_SQL.VARCHAR2S variable?
    It gives an error if I use dbms_output.put_line.

    in PLSQL :
    procedure p_try (p_test IN OUT DBMS_SQL.VARCHAR2S) is
    begin
        p_test.delete ;
        p_test(    -3000) := '===============' ;
        p_test(       22) := 'Hello'  ;
        p_test(    55555) := 'World' ;
        p_test(987654321) := '===============' ;
    end p_try;
    set serveroutput on
    declare
         l_test dbms_sql.varchar2s ;
         i number ;
    begin
         p_try (l_test) ;
         i :=  l_test.first ;
         while i >= l_test.first and i <= l_test.last loop
                 dbms_output.put_line (l_test(i)) ;
                 i := l_test.next(i) ;
         end loop ;
    end ;
    ===============
    Hello
    World
    ===============
    When using Forms, you would use TEXT_IO instead of DBMS_OUTPUT.
