ABAP to READ No. of COLUMNS

Hi,
I need ABAP code to read the number of columns (not rows) of an internal table that is created dynamically at runtime.
Is there any sample code or other help available?
Regards

Please check the F1 help of the DESCRIBE statement. No other code is needed; a simple DESCRIBE will do.
[DESCRIBE FIELD TYPE type COMPONENTS n|http://help.sap.com/saphelp_45b/helpdata/en/34/8e72f16df74873e10000009b38f9b8/content.htm]
Example
DATA: wf_infile TYPE REF TO data,
      wf_ltype  TYPE c,
      wi_count  TYPE i.
FIELD-SYMBOLS: <fs_line> TYPE any.
CREATE DATA wf_infile LIKE LINE OF it_data. "In your case IT_DATA is the dynamic table
ASSIGN wf_infile->* TO <fs_line>.
DESCRIBE FIELD <fs_line> TYPE wf_ltype COMPONENTS wi_count.
wi_count will be the result.
Kesav
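If the dynamic table only exists behind a data reference at runtime, RTTI gives the same count without building a work area first. A minimal sketch, assuming the table is held in a data reference variable lr_table (a hypothetical name) and that its line type is a flat structure:
DATA: lr_table   TYPE REF TO data,        "reference to the dynamic internal table
      lo_tabdesc TYPE REF TO cl_abap_tabledescr,
      lo_strdesc TYPE REF TO cl_abap_structdescr,
      lv_cols    TYPE i.
FIELD-SYMBOLS: <lt_any> TYPE ANY TABLE.

ASSIGN lr_table->* TO <lt_any>.
"Describe the table, take its line type and count the components
lo_tabdesc ?= cl_abap_typedescr=>describe_by_data( <lt_any> ).
lo_strdesc ?= lo_tabdesc->get_table_line_type( ).
DESCRIBE TABLE lo_strdesc->components LINES lv_cols.
lv_cols then holds the number of columns. If the line type is not a structure, the downcast to cl_abap_structdescr raises CX_SY_MOVE_CAST_ERROR, so this only applies to structured rows.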

Similar Messages

  • How to read only particular columns from an Excel sheet into an internal table

    Hi,
    I have an Excel sheet which has around 20 columns, of which I want to read only 6. They are at different column positions, for example the 1st column, 6th column, 8th column and so on.
    Can we do this in SAP? Do we have any FM to do this?
    Thanks.
    Praveena.

    hi,
    Use the logic below to fetch the data into the internal table. You need to read the data cell by cell and update the internal table:
    DATA l_count TYPE sy-tabix.
    FIELD-SYMBOLS <fs_source> TYPE any.
    CONSTANTS: lc_begin_col TYPE i VALUE '1',
               lc_begin_row TYPE i VALUE '2',
               lc_end_col   TYPE i VALUE '2',
               lc_end_row   TYPE i VALUE '3000'.
    CLEAR p_i_excel_data. REFRESH p_i_excel_data.
    * Function module to read excel file and convert it into internal table
       CALL FUNCTION 'KCD_EXCEL_OLE_TO_INT_CONVERT'
         EXPORTING
           filename                = p_p_file
           i_begin_col             = lc_begin_col
           i_begin_row             = lc_begin_row
           i_end_col               = lc_end_col
           i_end_row               = lc_end_row
         TABLES
           intern                  = i_data
         EXCEPTIONS
           inconsistent_parameters = 1
           upload_ole              = 2
           OTHERS                  = 3.
    * Error in file upload
       IF sy-subrc NE 0 .
         MESSAGE text-006 TYPE 'E'.
         EXIT.
       ENDIF.
       IF i_data[] IS INITIAL .
         MESSAGE text-007 TYPE 'E'.
         EXIT.
       ELSE.
         SORT i_data BY row col .
    * Loop to fill data in Internal Table
         LOOP AT i_data .
           MOVE i_data-col TO l_count .
           ASSIGN COMPONENT l_count OF STRUCTURE p_i_excel_data TO <fs_source> .
           MOVE i_data-value TO <fs_source> .
           AT END OF row .
    * Append data into internal table
             APPEND p_i_excel_data.
             CLEAR p_i_excel_data.
           ENDAT .
         ENDLOOP .
       ENDIF .
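    If only a few non-contiguous columns are needed (1st, 6th, 8th and so on, as in the question), the cell table can be filtered before filling the output line. A minimal sketch, assuming i_data has the usual ROW/COL/VALUE layout of the FM's INTERN table, and that wa_target and t_target are hypothetical output work area and table whose components 1, 2 and 3 receive Excel columns 1, 6 and 8:
    DATA lv_target TYPE i.
    FIELD-SYMBOLS <fs_cell> TYPE any.

    SORT i_data BY row col.
    LOOP AT i_data.
      "Map the Excel column to the position in the target structure
      CASE i_data-col.
        WHEN 1. lv_target = 1.
        WHEN 6. lv_target = 2.
        WHEN 8. lv_target = 3.
        WHEN OTHERS. CONTINUE.   "skip all other columns
      ENDCASE.
      ASSIGN COMPONENT lv_target OF STRUCTURE wa_target TO <fs_cell>.
      <fs_cell> = i_data-value.
      AT END OF row.
        APPEND wa_target TO t_target.
        CLEAR wa_target.
      ENDAT.
    ENDLOOP.
    Note that i_end_col passed to KCD_EXCEL_OLE_TO_INT_CONVERT must then cover the highest column required (8 here) rather than 2.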

  • Problem reading a xmlType column (return only 999 rows) with XMLTable

    Hello,
    I'm new to the forum.
    Sorry for my English.
    I have a problem reading an XMLType column.
    My Oracle version is 11.1.0.6.0.
    I have a table like this:
    Create Table TestXml (idProg number, xmldata XmlType)
    XMLTYPE COLUMN xmldata
    STORE AS BINARY XML;
    I have an XML file containing the equivalent of 10000 records (like a CSV).
    My XSD schema is the following:
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified">
         <xs:element name="applicazione">
              <xs:annotation>
                   <xs:documentation>Flusso Monitoraggi</xs:documentation>
              </xs:annotation>
              <xs:complexType>
                   <xs:sequence>
                        <xs:element name="periodo" maxOccurs="unbounded">
                             <xs:complexType>
                                  <xs:sequence>
                                       <xs:element name="segmento" maxOccurs="unbounded">
                                            <xs:complexType>
                                                 <xs:sequence>
                                                      <xs:element name="progressivo" maxOccurs="unbounded">
                                                           <xs:complexType>
                                                                <xs:sequence>
                                                                     <xs:element name="stato" type="xs:string"/>
                                                                     <xs:element name="ts_start" type="xs:string"/>
                                                                     <xs:element name="ts_stop" type="xs:string"/>
                                                                     <xs:element name="nota_esecuzione" type="xs:string"/>
                                                                     <xs:element name="ts_esecuzione" type="xs:string"/>
                                                                </xs:sequence>
                                                                <xs:attribute name="valore" type="xs:integer" use="required"/>
                                                           </xs:complexType>
                                                      </xs:element>
                                                 </xs:sequence>
                                                 <xs:attribute name="nome" type="xs:string" use="required"/>
                                            </xs:complexType>
                                       </xs:element>
                                  </xs:sequence>
                                  <xs:attribute name="nome" type="xs:string" use="required"/>
                             </xs:complexType>
                        </xs:element>
                   </xs:sequence>
                   <xs:attribute name="nome" type="xs:string"/>
              </xs:complexType>
         </xs:element>
    </xs:schema>
    When I try to read my xmltype column with this select :
    SELECT ap.desc_applicazione,pe.seq_periodo, pe.desc_elem_temp_ist,
    sg.desc_segmento_elab,pg.seq_progressivo,
    pg.desc_stato,pg.ts_start,pg.ts_stop,pg.nota_esecuzione,pg.ts_esecuzione,
    pg.valore_prog,sg.seq_segmento,idProg
    FROM testXML p,
    XMLTable('/applicazione' PASSING p.xmlData
    COLUMNS
    seq_applicazione for ordinality,
    desc_applicazione VARCHAR2(50) PATH '@nome',
    periodo XMLType PATH 'periodo') ap,
    XMLTable('periodo' PASSING ap.periodo
    COLUMNS
    seq_periodo for ordinality,
    desc_elem_temp_ist VARCHAR2(50) PATH '@nome',
    segmento XMLType PATH 'segmento') pe,
    XMLTable('segmento' PASSING pe.segmento
    COLUMNS
    seq_segmento for ordinality,
    desc_segmento_elab VARCHAR2(50) PATH '@nome',
    progressivo XMLTYPE PATH 'progressivo') sg,
    XMLTable('progressivo' PASSING sg.progressivo
    COLUMNS
    seq_progressivo for ordinality,
    valore_prog NUMBER PATH '@valore',
    desc_stato VARCHAR(10) PATH 'stato',
    ts_start VARCHAR2(50) PATH 'ts_start',
    ts_stop VARCHAR2(50) PATH 'ts_stop',
    nota_esecuzione VARCHAR2(50) PATH 'nota_esecuzione',
    ts_esecuzione VARCHAR2(50) PATH 'ts_esecuzione'
    ) pg
    where idProg = 1
    I obtained only 999 rows.
    I tried with two XML files, one containing 10000 repetitions of progressivo, the other containing 10000 repetitions of segmento.
    Does anybody know why? Where is my error?
    Thank you in advance for any response.
    Maurizio


  • My list view is now black with white text. I cannot read the ratings column at all. How do I change this?

    My list view is now black with white text. I cannot read the ratings column at all. How do I change this?

    You have to pay to subscribe to iMatch to be able to delete.

  • Reading MS Project column names and data on the fly from a selected View

    Hi guys,
    I have several views on my project file (MSPROJECT 2010) and I want to build a macro so that;
    1. The user can select any view (views can have different columns and the user may add new columns as well).
    2. The user runs the macro and all the columns, along with the tasks displayed in the view, are written to an Excel file. (I don't want to build a separate macro for each view; I'm thinking of a common method that would work for any selected view.)
    The problem I'm facing is: how do I read the column names and data for a particular view on the fly, without hard-coding them inside the VBA code?
    The solution needs to work on a master schedule as well.
    Appreciate your feedback.

    Just to get you started the following code writes the field name and data for the active task to the Immediate window.
    Sub CopyData()
        Dim fld As TableField
        For Each fld In ActiveProject.TaskTables(ActiveProject.CurrentTable).TableFields
            If fld.Field >= 0 Then
                Debug.Print Application.FieldConstantToFieldName(fld.Field), ActiveCell.Task.GetField(fld.Field)
            End If
        Next fld
    End Sub
    Rod Gill
    Author of the one and only Project VBA Book
    www.project-systems.co.nz

  • Selective XML Index feature is not supported for the current database version , SQL Server Extended Events , Optimizing Reading from XML column datatype

    Team, thanks for looking into this.
    As a last resort to optimize my stored procedure (below) I wanted to create a selective XML index (normal XML indexes don't seem to improve performance enough), but within my stored proc I keep getting the error "Selective XML Index feature is not supported for the current database version". However,
    EXECUTE sys.sp_db_selective_xml_index; returns 1, stating that selective XML indexes are enabled on my current database.
    Is there ANY alternative way I can optimize the stored proc below?
    Thanks in advance for your response(s)!
    /****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- EXEC [dbo].[MN_Process_DDLSchema_Changes]
    ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
    AS
    BEGIN
    SET NOCOUNT ON --Does'nt have impact ( May be this wont on SQL Server Extended events session's being created on Server(s) , DB's )
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    select getdate() as getdate_0
    DECLARE @XML XML , @Prev_Insertion_time DATETIME
    -- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
    SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
    -- PRINT '1'
    CREATE TABLE #Temp
    (
    EventName VARCHAR(100),
    Time_Stamp_EE DATETIME,
    ObjectName VARCHAR(100),
    ObjectType VARCHAR(100),
    DbName VARCHAR(100),
    ddl_Phase VARCHAR(50),
    ClientAppName VARCHAR(2000),
    ClientHostName VARCHAR(100),
    server_instance_name VARCHAR(100),
    ServerPrincipalName VARCHAR(100),
    nt_username varchar(100),
    SqlText NVARCHAR(MAX)
    )
    CREATE TABLE #XML_Hold
    (
    ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK necessity for Indexing on XML Col
    BufferXml XML
    )
    select getdate() as getdate_01
    INSERT INTO #XML_Hold (BufferXml)
    SELECT
    CAST(target_data AS XML) AS BufferXml -- Buffer Storage from SQL Extended Event(s) , Looks like there is a limitation with xml size ?? Need to re-search .
    FROM sys.dm_xe_session_targets xet
    INNER JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
    WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session being created withing SQL Server Extended Events
    --RETURN
    --SELECT * FROM #XML_Hold
    select getdate() as getdate_1
    -- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
    CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
    FOR
    (
    PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
    )
    --RETURN
    --CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
    --SELECT GETDATE() AS GETDATE_2
    -- RYelugu 03/10/2015 -Creating secondary XML index doesnt make significant improvement at Query Optimizer , Instead creation takes more time , Only primary should be good here
    --CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
    --USING XML INDEX [IX_XML_Hold]
    ---- FOR VALUE
    -- --FOR PROPERTY
    -- FOR PATH
    --SELECT GETDATE() AS GETDATE_3
    --PRINT '2'
    -- RETURN
    SELECT GETDATE() GETDATE_3
    INSERT INTO #Temp
    (
    EventName ,
    Time_Stamp_EE ,
    ObjectName ,
    ObjectType,
    DbName ,
    ddl_Phase ,
    ClientAppName ,
    ClientHostName,
    server_instance_name,
    nt_username,
    ServerPrincipalName ,
    SqlText
    )
    SELECT
    p.q.value('@name[1]','varchar(100)') AS eventname,
    p.q.value('@timestamp[1]','datetime') AS timestampvalue,
    p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
    p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
    p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
    p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
    p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
    p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
    p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
    p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
    p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
    p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
    FROM #XML_Hold
    CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
    WHERE -- Ryelugu 03/05/2015 - Perf Optimize - Filtering the Buffered XML so as not to lookup at previoulsy loaded records into stage table
    p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
    AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every Event records a begin version and a commit version into Buffer ( XML ) we need the committed version
    AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records being caprutred by Replication Monitor ??
    SELECT GETDATE() GETDATE_4
    -- SELECT * FROM #TEMP
    -- SELECT COUNT(*) FROM #TEMP
    -- SELECT GETDATE()
    -- RETURN
    -- PRINT '3'
    --RETURN
    INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
    (
    [UserName]
    ,[DbName]
    ,[ObjectName]
    ,[client_app_name]
    ,[ClientHostName]
    ,[ServerName]
    ,[SQL_TEXT]
    ,[EE_Time_Stamp]
    ,[Event_Name]
    )
    SELECT
    CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
    ELSE T.nt_username
    END
    ,T.DbName
    ,T.objectname
    ,T.clientappname
    ,t.ClientHostName
    ,T.server_instance_name
    ,T.sqltext
    ,T.Time_Stamp_EE
    ,T.eventname
    FROM
    #TEMP T
    /** -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from BUFFER or on XML
    -- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
    WHERE ddl_Phase ='Commit'
    AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - May be SQL Server Internally Creates Statistics for #Temp tables , we do not want Creation of STATISTICS Statement to be logged
    AND ObjectName NOT LIKE '%#%' -- Any stored proc which creates a temp table within it Extended Event does capture this creation statement SQL as well , we dont need it though
    AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance Optimize
    AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
    WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu Server Name needes to be added on to to xml ( Events in session )
    AND MN.[DbName] = T.DbName
    AND MN.[Event_Name] = T.EventName
    AND MN.[ObjectName]= T.ObjectName
    AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
    AND MN.[SQL_TEXT] =T.SqlText -- Ryelugu 03/05/2015 This is a comparision Metric as well , But needs to decide on
    -- Performance factor here , will take advice from Lance whether comparison on varchar(max) is a vital idea
    )
    **/
    --SELECT GETDATE()
    --PRINT '4'
    --RETURN
    SELECT
    top 100
    [EE_Time_Stamp]
    ,[ServerName]
    ,[DbName]
    ,[Event_Name]
    ,[ObjectName]
    ,[UserName]
    ,[SQL_TEXT]
    ,[client_app_name]
    ,[Created_Date]
    ,[ClientHostName]
    FROM
    [dbo].[MN_DDLSchema_Changes_log]
    ORDER BY [EE_Time_Stamp] desc
    -- select getdate()
    -- ** DELETE EVENTS after logging into Physical table
    -- NEED TO Identify if this @XML can be updated into physical system table such that previously loaded events are left untoched
    -- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
    -- SELECT @XML
    SELECT GETDATE() GETDATE_5
    END
    GO
    Rajkumar Yelugu

    @@Version:
    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
        May 14 2014 18:34:29
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    (1 row(s) affected)
    Compatibility level is set to 110.
    One of the limitations states: XML columns with a depth of more than 128 nested nodes are not supported.
    How do I verify this? Thanks.
    Rajkumar Yelugu

  • BULK INSERT from a text (.csv) file - read only specific columns.

    I am using Microsoft SQL 2005. I need to do a BULK INSERT from a .csv I just downloaded from PayPal. I can't edit some of the columns that are given in the report. I am trying to load specific columns from the file.
    BULK INSERT Orders
    FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
    WITH
    (
        FIELDTERMINATOR = ',',
        FIRSTROW = 2,
        ROWTERMINATOR = '\n'
    )
    So where would I state which column names (from row #1 of the .csv file) map to which specific columns in the table?
    I saw this on one of the sites, which seemed to guide me towards the answer, but I failed. Here is what it said; it might help you:
    FORMATFILE [ = 'format_file_path' ]
    Specifies the full path of a format file. A format file describes the data file that contains stored responses created using the bcp utility on the same table or view. The format file should be used in cases in which:
    The data file contains greater or fewer columns than the table or view.
    The columns are in a different order.
    The column delimiters vary.
    There are other changes in the data format. Format files are usually created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.

    Date, Time, Time Zone, Name, Type, Status, Currency, Gross, Fee, Net, From Email Address, To Email Address, Transaction ID, Item Title, Item ID, Buyer ID, Item URL, Closing Date, Reference Txn ID, Receipt ID,
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392302", "jdal32", "http://ddd.com", "04/22/03", "", "",
    "04/22/07", "12:00:21", "PDT", "Test", "Payment Received", "Cleared", "USD", "321", "2.32", "3213', "[email protected]", "[email protected]", "", "testing", "392932930302", "jejsl32", "http://ddd.com", "04/22/03", "", "",
    Do you need more than 2 rows? I did not include all the columns from the actual CSV file, but most of them. I am planning on taking these specific columns to the first table: date, to email address, transaction ID, item title, item ID, buyer ID, item URL.
    The other table, I don't have any values from here because I did not list them, but if you do this for me I could probably figure the other table out.
    Thank you very much.

  • Read only specific Columns into the cursor

    Hey everybody,
    I'm trying to make a cursor which takes only columns 2 to n-2 (where n is the number of columns). The table size should be variable.
    Is there any possibility to do that?
    Like
    CURSOR nodes IS SELECT * FROM nn_input WHERE column_id > 1 AND column_id < count(columns)-2;
    Another possibility to solve the problem could be to read all columns in the cursor and then, in the for-loop, write the 2nd to (n-2)th values into table1 and the last two values into table2...
    Anyone an idea? Thanks a lot!

    You need to use dynamic SQL and build up your query at run time. You could use something like the following to generate the list of columns...
    CREATE TABLE test_columns
    (
    col1 number,
    col2 number,
    col3 number,
    col4 number,
    col5 number
    );
    SELECT LTRIM(MAX(SYS_CONNECT_BY_PATH(column_name,',')),',')
    FROM   user_tab_columns
    START WITH column_id = &start_col_num
    AND        table_name = 'TEST_COLUMNS'
    CONNECT BY column_id = PRIOR column_id +1
    AND  table_name = PRIOR table_name
    AND  level <= &end_col_num-&start_col_num+1;
    Using a start_col_num = 2 and end_col_num = 4 gives:
    LTRIM(MAX(SYS_CONNECT_BY_PATH(COLUMN_NAME,',')),',')
    COL2,COL3,COL4
    There are lots and lots of other options...
    Greg

  • Read Only ID Columns

    Hi,
    I have several simple look up tables with two columns, ID and DESCRIPTION.
    ID is a numeric primary key.
    They have been put into their own groups with the Layout Style set to "table".
    The ID attribute's "Updateable?" has been set to "When New".
    My problem is that when I go to the pages that edit these tables, some of the ID columns
    are editable while some of the ID columns are read-only. I want all of the ID columns
    to be read-only (except for the new row).
    The ID attributes look identical in the ADF BC Properties Editor for all of the tables.
    Where else should I look for differences?
    Cheers,
    Patrick Cimolini

    Patrick,
    Actually, the line should not be present in the View Object file. Only if it is cleared, the value from the Entity Object should be picked up. I have experimented a bit and I could reproduce one weird case.
    Start by going to the Entity attribute and set it to 'Always'. Then go to the View attribute and set it to 'Always' as well. Now check the ViewObject xml file. The 'isUpdateable' attribute should NOT be there now! If it is, something weird is going on and I would close JDev, remove the attribute in notepad and launch JDev again.
    In the next bit, take care you follow the steps in this exact order. Go to the View Object attribute, and set it to 'While New'. Go to the Entity Attribute and set it to 'Never'. Go back to the View Object attribute. It will now be set to 'Never' and you won't have the option to put it back to 'While New' or 'Always' (greyed out). All very nice and logical, but if you look at the View Object XML file, it still says 'isUpdateable="while_insert"'. This is indeed an inconsistency between the JDev GUI and the actual XML file. As JHeadstart uses the XML file, it will use the 'while_insert' instead of the 'Never' that both the Entity and ViewObject GUI are showing you.
    However, this is the only problem I have found. Could you please make sure that if you set everything back to 'Always', if necessary remove that 'isUpdateable' attribute from the ViewObject XML file as I mentioned above, then only set the 'While New' at the Entity level, and make sure that the 'isUpdateable' property is NOT present at the ViewObject XML file, if everything works as expected when running the generator?
    Kind regards,
    Peter Ebell
    JHeadstart Team

  • Question in ABAP syntax, read & insert data from internal table, while loop

    Hi, SDN Fellow.
    I am from a Java background and have learnt ABAP, but I don't usually write much ABAP code.
    I am trying to implement the following logic in an RFC now.
    I have one custom Z database table, with the following structure:
    It has two columns, with this sample data.
    Say the database table is ZEMPMGRTAB.
    EmployeeID,ManagerID
    user10,user1
    user9,user1
    user8,user1
    user7,user2
    user6,user2
    user5,user2
    user4,user2
    user2,user1
    The logic is this:
    I have an input parameter, USERID. I use this parameter in a SELECT statement to query the record into an export table, EXPTAB, which is 'LIKE' table ZEMPMGRTAB.
    SELECT * FROM  ZEMPMGRTAB
      into table EXPTAB
       WHERE  EMPLOYEEID  = USERID.
    Say, my parameter value, USERID ='USER4'.
    Referring to the sample data above, I get this record in my EXPTAB:
    EmployeeID,ManagerID
    user4,user2
    Now I want to iteratively use EXPTAB-ManagerID as the USERID input of the SELECT statement, until it returns no result, and insert the new records into
    EXPTAB.
    In the new loop case above, we will get this table content in EXPTAB:
    EmployeeID,ManagerID
    user4,user2
    user2,user1
    I think of the pseudocode logic as below
    (this may not be valid ABAP code, so I need help to convert/correct it):
    DATA:
    IWA TYEP ZZEMPMGRTAB,
    ITAB
    HASHED TABLE OF ZZEMPMGRTAB
    WITH UNIQUE KEY EMPLOYEEID.
    SELECT * FROM  ZEMPMGRTAB
      into table ITAB
       WHERE  EMPLOYEEID  = USERID.
    *Question 1: I cannot insert an internal table into the export table, it is an incompatible type. What is the alternative way for this?
    *Question 2: How can I access the data of the internal table like this: ITAB-MANAGERID? If I can, I would do this:
    * IWA-EMPLOYEEID = ITAB-EMPLOYEEID. IWA-MANAGERID = ITAB-MANAGERID. INSERT IWA INTO TABLE EXPTAB.
    * Question 3: Is 'NE NULL' ('not equal to NULL') the right syntax?
    IF ITAB NE NULL.
    INSERT ITAB INTO EXPTAB.
    ENDIF
    * Question 4: Is my WHILE loop set up right here? And is the syntax right?
    WHILE ITAB NE NULL.
    SELECT * FROM  ZEMPMGRTAB
      into table ITAB
       WHERE  EMPLOYEEID  = ITAB-MANAGERID.
    IF ITAB NE NULL.
    INSERT ITAB INTO EXPTAB.
    ENDIF
    REFRESH ITAB.
    ENDWHILE.
    Assuming all the syntax and logic are right, I should get this result:
    EmployeeID,ManagerID
    user4,user2
    user2,user1
    If I have a new entry in database table ZEMPMGRTAB like this:
    user1,user0
    My pseudocode logic will get this result:
    EmployeeID,ManagerID
    user4,user2
    user2,user1
    user1,user0
    I would truly appreciate it if you could help me validate the above syntax and pseudocode logic.
    Thanks in advance.
    KC

    Hi,
    FUNCTION ZGETSOMEINFO3.
    *"*"Local Interface:
    *"  IMPORTING
    *"     VALUE(USERID) TYPE  AWTXT
    *"     VALUE(FMTYPEID) TYPE  AWTXT
    *"  EXPORTING
    *"     VALUE(RETURN) TYPE  BAPIRETURN
    *"  TABLES
    *"      APPROVERT STRUCTURE  ZTAB_FMAPPROVER
    *"      ACTOWNERT STRUCTURE  ZTAB_FMACTOWNER
    DATA: T_RESULT TYPE STANDARD TABLE OF ZTAB_FMAPPROVER.
    **Question 1: For this line, I get an error saying "Program 'USERID' not found". Is the syntax right, given that USERID is a parameter of the function?
    perform add_line(USERID).
      ENDFUNCTION.
    form add_line using i_user type ZTAB_FMAPPROVER.EMPLOYEEID
                        changing T_RESULT TYPE ZTAB_FMAPPROVER.
    data: ls_row type ZTAB_FMAPPROVER.
    * Get record for i_user
    select single * into ls_row from ZTAB_FMAPPROVER
    where EmployeeID = i_user.
    if sy-subrc NE 0.
    * Do nothing, there is not manager for this employee
    else.
    * Store result
    QUESTION 2: I am still stuck on this line of code. It still says that "T_RESULT" is not an internal table, the "OCCURS n" specification is missing. I thought the line "T_RESULT TYPE ZTAB_FMAPPROVER" declares an internal table T_RESULT of type ZTAB_FMAPPROVER. Am I understanding it wrongly?
    append ls_row to t_result.
    * Call recursion
    perform add_line using ls_row-ManagerID
                              changing t_result.
    endif.
    endform.
    Thanks,
    KC
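    Pulling the four questions of the original post together, here is a minimal, non-recursive sketch of the manager-chain lookup. It assumes the names from the post: table ZEMPMGRTAB with fields EMPLOYEEID and MANAGERID, USERID as the importing parameter and EXPTAB as the exporting table of the RFC. ABAP has no NULL comparison for internal tables; check sy-subrc after each SELECT (or itab[] IS INITIAL to test for an empty table) instead:
    DATA: wa_emp  TYPE zempmgrtab,
          lv_user TYPE zempmgrtab-employeeid.

    lv_user = userid.                        "importing parameter of the RFC

    DO.
      SELECT SINGLE * FROM zempmgrtab
             INTO wa_emp
             WHERE employeeid = lv_user.
      IF sy-subrc <> 0.                      "no entry found -> chain ends
        EXIT.
      ENDIF.
      APPEND wa_emp TO exptab.               "user4/user2, then user2/user1, ...
      lv_user = wa_emp-managerid.            "climb one level up the chain
    ENDDO.
    In productive code a guard against cyclic manager entries would be advisable, otherwise the DO loop would never terminate.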

  • Reading dynamic table column based on user selection

    Hi there,
    I am having a problem reading and manipulating the data stored in a standard SAP table. The following example simulates the table and what I am trying to do:
    Table: Storing sales data for sales person
    SALES_PERSON   REGION   YEAR   MTH_S1   MTH_S2   MTH_S3   MTH_S4 ...
    Richard S      NORTH    2007      100      200      300      400
    John K         SOUTH    2007       50      100      100       20
    Brad P         NORTH    2007      300      100      100       50
    The user has the following selection options:
    1. Month
    The program will calculate the sales based on the individual month selected.
    Example: if the user selects Month = 3, the program takes only the MTH_S3 column values.
    So total sales = 300 + 100 + 100 = 500
    2. Month range
    The program will calculate the sales based on the month range selected.
    Example: if the user selects Month 2 to 4, the program takes the MTH_S2 to MTH_S4 column values.
    So total sales = 400 (for MTH_S2) + 500 (for MTH_S3) + 470 (for MTH_S4) = 1370
    How should I write the logic or code for this requirement?
    Hope someone can help.
    Thanks,
    Pang HK

    Try something like this
    TABLES:
    t247.
    SELECT-OPTIONS:
      s_month FOR t247-mnr NO-EXTENSION.
    DATA:
      BEGIN OF fs_data,
        person(30),
        area(10),
        year(4),
        mon1 TYPE kbetr,
        mon2 TYPE kbetr,
        mon3 TYPE kbetr,
        mon4 TYPE kbetr,
        mon5 TYPE kbetr,
      END OF fs_data,
      t_data LIKE STANDARD TABLE OF fs_data,
      w_no_months TYPE i,
      w_kbetr TYPE kbetr,
      w_total TYPE kbetr.
    LOOP AT t_data INTO fs_data.
      CLEAR w_kbetr.
      DO 5 TIMES VARYING w_kbetr FROM fs_data-mon1 NEXT fs_data-mon2.
        IF sy-index IN s_month.
          w_total = w_total + w_kbetr.
        ENDIF.
      ENDDO.
    ENDLOOP.
    Change the value 5 according to the number of month columns in your internal table.
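    An alternative to the (obsolete) VARYING addition is to address the month column by its position in the structure with ASSIGN COMPONENT. A minimal sketch, assuming the month amounts start at component 4 of the same fs_data structure (after person, area and year):
    DATA: lv_month TYPE i,
          lv_comp  TYPE i.
    FIELD-SYMBOLS <fs_amount> TYPE kbetr.

    LOOP AT t_data INTO fs_data.
      DO 5 TIMES.                            "one pass per month column
        lv_month = sy-index.
        IF lv_month IN s_month.
          lv_comp = 3 + lv_month.            "month columns start at component 4
          ASSIGN COMPONENT lv_comp OF STRUCTURE fs_data TO <fs_amount>.
          IF sy-subrc = 0.
            w_total = w_total + <fs_amount>.
          ENDIF.
        ENDIF.
      ENDDO.
    ENDLOOP.
    This keeps the month-to-component mapping in one place and works for any number of month columns.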

  • Reading a semicolon-delimited multi-line flat file on the KM repository

    Hi,
    I have a requirement in our project to read a multi-line, semicolon-delimited flat file kept on the Knowledge Management repository and display its contents in the Portal.
    I have tried a couple of options and was unable to read the file. I am not sure which APIs I should be using.
    If any of the experts could guide me with sample code, that would be great!
    Your early response is highly appreciated.
    Regards
    Venkat

    here you go.
    //******* Read file from KM
    String repository_km = "/documents/data/data.txt";
    List<String> strText = new ArrayList<String>();
    try {
        // Getting the user......
        IWDClientUser wdClientUser = WDClientUser.getCurrentUser();
        IUser sapUser = wdClientUser.getSAPUser();
        com.sapportals.portal.security.usermanagement.IUser ep5User =
            WPUMFactory.getUserFactory().getEP5User(sapUser);
        // Getting the resource.........
        IResourceContext resourseContext = new ResourceContext(ep5User);
        IResourceFactory resourseFactory = ResourceFactory.getInstance();
        // Path in the KM folder ("/documents/data/data.txt")
        RID rid = RID.getRID(repository_km);
        com.sapportals.wcm.repository.IResource resource =
            resourseFactory.getResource(rid, resourseContext);
        if (resource != null) {
            String text = "";
            BufferedReader in = new BufferedReader(
                new InputStreamReader(resource.getContent().getInputStream()));
            while ((text = in.readLine()) != null) {
                strText.add(text);   // collect the lines; split on ';' afterwards as needed
            }
            in.close();
        }
    } catch (Exception e2) {
        wdComponentAPI.getMessageManager().reportException(
            "Error in reading file from KM : " + e2.getMessage(), true);
    }

  • How to read the blob column?

    Hi,
    I wanted to store a zip file in a BLOB column and I also want to download the file.
    I have tried it in the following manner:
    -- Creation of the table
    create table demo
    ( ID int,
    theblob blob);
    --insert the zip file into the table
    declare
    l_blob blob;
    l_bfile bfile;
    begin
    insert into demo values ( 5, empty_blob() )
    returning theBlob into l_blob;
    l_bfile := bfilename( 'MWDIR_TST', 'demo.zip' );
    dbms_lob.fileopen( l_bfile );
    dbms_lob.loadfromfile( l_blob, l_bfile,
    dbms_lob.getlength( l_bfile ) );
    dbms_lob.fileclose( l_bfile );
    end;
    -- Function to convert the blob into clob
    CREATE OR REPLACE FUNCTION XBLOB_To_CLOB(L_BLOB BLOB) RETURN CLOB IS
    L_CLOB               CLOB;
    L_SRC_OFFSET     NUMBER;
    L_DEST_OFFSET     NUMBER;
    L_BLOB_CSID          NUMBER := DBMS_LOB.DEFAULT_CSID;
    V_LANG_CONTEXT     NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    L_WARNING          NUMBER;
    L_AMOUNT          NUMBER;
    BEGIN
    IF DBMS_LOB.GETLENGTH(L_BLOB) > 0 THEN
    DBMS_LOB.CREATETEMPORARY(L_CLOB, TRUE);
    L_SRC_OFFSET := 1;
    L_DEST_OFFSET := 1;
    L_AMOUNT := DBMS_LOB.GETLENGTH(L_BLOB);
    DBMS_LOB.CONVERTTOCLOB(L_CLOB,
    L_BLOB,
    L_AMOUNT,
    L_SRC_OFFSET,
    L_DEST_OFFSET,
    1,
    V_LANG_CONTEXT,
    L_WARNING);
    RETURN L_CLOB;
    ELSE
    L_CLOB:= TO_CLOB('');
    RETURN L_CLOB;
    End IF;
    DBMS_LOB.FREETEMPORARY(L_CLOB);
    END XBLOB_To_CLOB;
    -- Procedure to wtire clob into file
    CREATE OR REPLACE PROCEDURE Write_CLOB_To_File ( directory_name varchar2,filename varchar2, clob_loc CLOB )
    IS
    buffer VARCHAR2(32767);
    buffer_size CONSTANT BINARY_INTEGER := 32767;
    amount BINARY_INTEGER;
    offset NUMBER(38);
    file_handle UTL_FILE.FILE_TYPE;
    BEGIN
    file_handle := UTL_FILE.FOPEN(location => directory_name,filename => filename,open_mode => 'w',max_linesize => buffer_size);
    amount := buffer_size;
    offset := 1;
    -- READ FROM CLOB / WRITE OUT TO DISK
    WHILE amount >= buffer_size
    LOOP
    DBMS_LOB.READ(lob_loc => clob_loc,amount => amount,offset => offset,buffer => buffer);
    buffer:=replace(buffer,chr(13),'');
    offset := offset + amount;
    UTL_FILE.PUT(file => file_handle,buffer => buffer);
    UTL_FILE.FFLUSH(file => file_handle);
    END LOOP;
    UTL_FILE.FCLOSE(file => file_handle);
    END Write_CLOB_To_File;
    -- To execute use the following example
    declare TmpClob CLOB;
    begin
    select XBLOB_TO_CLOB(theblob) into TmpClob from demo where id=5;
    Write_Clob_To_File('TEMP','demo.txt',TmpClob);
    end;
    When I execute the above code it does not bring back the exact binary values from the database.
    Can anyone help me with this?
    Thanks
    Rangan S

    On another note it's always a good idea to tell us what database version you are using e.g. 9.2.0.7 or 10.2.0.3 etc. as this will allow us to tailor our answers appropriately in case there are features we can't use because you are using an older version.
    And, when posting code, please put {noformat}{noformat} tags around it so that it keeps its formatting on the forum.

  • ABAP -WebDynpro - Read URL Parameter

    Hi,
    We have a custom 'Catalog' that needs to be called from SRM using OCI (Open Catalog Interface). We are developing this custom catalog using ABAP Web Dynpro. We have the following issues, because of which we are unable to communicate back to the shopping cart from the custom catalog:
    1. We are unable to read the value of 'HOOK_URL' in the Web Dynpro application. We need this value so that it can be used as the exit URL. How and where do we read this URL parameter in ABAP Web Dynpro?
    2. How can we return the 'HTML form' to SRM with the values from the custom catalog to fill the shopping cart?
    I would appreciate your help...
    Thanks
    Meenal

    Hi Meenal,
    I had the same problem.
    Did you define the HOOK_URL parameter in your web service? If not, do this first.
    Second, in your Web Dynpro read the HOOK_URL in the handledefault method of your window. Store it e.g. as an attribute in your assistance class.
    method handledefault .
    *&- Add FOF and HOOK_URL to assistance class attributes
      wd_assist->hook_url = hook_url.
    endmethod.
    After this you can read your HOOK_URL e.g. in a view via:
    CONCATENATE 'HOOK_URL=' wd_assist->hook_url zlv_hook_url INTO zlv_hook_url.
    I hope your question is answered.
    What I want to know is: how do I control the HOOK_URL so that the form fields of the external catalog are added to the shopping cart? Do you know that?
    John

  • How to read a total column of a table?

    Hi All,
    I have a requirement where I have a hidden column in a table. I need to read its total value and place it as another field's total value.
    Note: the results table of the page is not an advanced table.
    OAMessageStyledTextBean HiddenAmount =
    (OAMessageStyledTextBean) webBean.findChildRecursive("HiddenAmount");
    if (HiddenAmount != null) {
        strHiddenTotal  = HiddenAmount.getAttributeValue(HiddenAmount.TABULAR_FUNCTION_SUM) + "";
        strHiddenTotal1 = HiddenAmount.getAttributeValue(HiddenAmount.TABULAR_FUNCTION_SUM) + "";
        strHiddenTotal2 = HiddenAmount.getAttributeValue(TABULAR_FUNCTION_VALUE_ATTR) + "";
    }
    But so far I am getting only null values.
    Thanks in advance
    Deep

    Hi Gaurav,
    It's not an advanced table. I have used the region style Table. "HiddenAmount" is the column name, of type MessageStyledText, for which I want to read the total.
    Edited by: user13535721 on Dec 24, 2010 5:25 AM

Maybe you are looking for

  • Pl/sql block returning multiple rows

    Hi, I've created a plsql block which obtains an id from a name and then uses this id in another sql statement. The select statement to get the id works fine and the correct id is placed into the variable awardID. when i try to use this variable in an

  • HT201210 i cant connect my iPhone to iTunes because it has been deactivated.

    Hey, this other day my iPhone 4 came after i had ordered it from apple.com and after i had started the phone and using it i made a pincode for the phone. After that i forgot the pincode, and i had to wait 5 min, 15 min, 1 hour, then i had to plug in

  • Automatic clearing for Credit Memos

    Hi Experts, When Credit Memos are Created with Reference to Billing document, we would like to have the Credit memos Cleared Automatically. How can we set that, So that Credit Memos(with Reference) are Automatically Cleared? Thanks Montee

  • PM work order tables in SAP

    Do anyone know which table in SAP contains the functional location and equipment info of an work order. I can find most of the info of an order in AUFK but missing the two fields mentioned above. Thanks

  • E90 FirmWare Update Problem

    I had updated my E90 phone firmware from Nokia Software Updater. The update process went through easily but the phone did not restart properly. Phone screen is shows Nokia written on the screen and then the screen goes blank and the phone screen agai