XML fields importing in separate rows in Excel

When importing the *.xml files from a returned form into Excel 2010, the fields correctly populate into columns, but each field also gets its own row. It seems the XML is specifying a 'new row' for each field when it isn't required. This creates a diagonal display of information (headers along row 1, info in A2, B3, C4, D5...).
When I import multiple *.xml files, I get multiple diagonal displays, which look pretty but aren't very functional.
Any ideas on how to get this to display correctly?
Thanks
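For reference, Excel's XML import creates one worksheet row per repeating record element, so the diagonal usually means each field arrives as its own top-level element instead of being grouped under a single record element. A minimal sketch of the shape that lands on one row per record (element names are hypothetical, not taken from the actual form):

<responses>
  <response>
    <name>Jane Doe</name>
    <department>Finance</department>
    <score>42</score>
  </response>
  <response>
    <name>John Roe</name>
    <department>IT</department>
    <score>17</score>
  </response>
</responses>

If the form's XML can't be changed, an XSLT transform or an Excel XML Map that binds all fields to the same repeating element achieves the same grouping.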

Hello, Allan:
Thanks for the reply...I never would have divined the
SQL statement that you provided:)
IMHO, learning how to mix SQL with XML-related stuff
seems to be non-trivial in many cases. Even for the
people who know XPath/XSLT, the exact SQL syntax is not
necessarily obvious:(
Is there a tutorial/FAQ that provides lots of examples
that illustrate useful/interesting tips/techniques?
I think many people would benefit from such an FAQ.
For the truly ambitious, this could be the start of
an interesting book (e.g., "XML and SQL in Oracle").
Regards,
Oswald

Similar Messages

  • Adding every other field of two separate rows together.

    Hi,
    We're using Oracle 11.1.
    I'd like to add two rows from two different queries together.
    I'd like it to look something like:
    with
    a as
    ( select 'a' q,'b' r,'c' s,'d' t,'e' u from dual),
    b as
    ( select '1' q, '2' r,'3' s,'4' t,'5' u from dual)
    select a.q, b.r, a.s, b.t, a.u from a, b;
    The trouble is I'm already using WITH, and both of the 'real queries' get their data from the progressive WITH statements already up top.
    I'd like to add every other field from each query to a result row.
    I have to use only SQL.
    Is there a way I can use UNION and combine the rows using dummy fields for the fields I don't want to use?
    Something like the following; I know this isn't right, but it's sort of what I want to do.
    select * from
    ( select 'a' q,' ' r,'c' s,' ' t,'e' u from dual
    union
    select '1' q, '2' r,' ' s,'4' t,' ' u from dual);
    Anybody?

    Can you try using a join?
    select a.q, b.r, a.s, b.t, a.u from
    (select 'a' q,'b' r,'c' s,'d' t,'e' u from dual) a,
    ( select '1' q, '2' r,'3' s,'4' t,'5' u from dual) b;
    (Note that Oracle does not accept the AS keyword for table aliases, so the inline views are aliased simply as a and b.)
    This will result in a Cartesian join, but if you have a common column between a & b, then you can use it in the where clause.
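    Applied to the original problem, the same idea can sit at the end of the existing WITH chain. A sketch only, with made-up names (real_query_1 and real_query_2 stand for the two 'real queries' already defined up top, assumed here to return a single row each):
    with
    real_query_1 as ( select 'a' q, 'b' r, 'c' s, 'd' t, 'e' u from dual ),
    real_query_2 as ( select '1' q, '2' r, '3' s, '4' t, '5' u from dual )
    select a.q, b.r, a.s, b.t, a.u
    from real_query_1 a,
         real_query_2 b;
    If each query can return more than one row, add a join condition (or pair the rows on a ROWNUM/ROW_NUMBER column) in a WHERE clause to avoid the Cartesian product.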

  • How do I import one xml file into 3 separate tables in db?

    I need to utilize xslt to import one xml file into 3 separate tables: account, accountAddress, streetAddress
    *Notice the missing values in middleName, accountType
    sample xml
    <account>
    <firstName>Joe</firstName>
    <middleName></middleName>
    <lastName>Torre</lastName>
    <accountAddress>
    <streetAddress>
    <addressLine>myAddressLine1</addressLine>
    <addressLine>myAddressLine2</addressLine>
    </streetAddress>
    <city>myCity</city>
    <state>myState</state>
    <postalCode>mypostalCode</postalCode>
    </accountAddress>
    <accountId>A001</accountId>
    <accountType></accountType>
    </account>
    I need the following 3 results in 3 separate xml files in order for me to upload into my 3 tables.
    Result #1
    <rowset>
    <row>
    <firstName>Joe</firstName>
    <lastName>Torre</lastName>
    <accountId>A001</accountId>
    </row>
    </rowset>
    Result #2
    <rowset>
    <row>
    <addressId>1</addressId>
    <city>myCity</city>
    <state>myState</state>
    <postalCode>myPostalCode</postalCode>
    </row>
    </rowset>
    Result #3
    <rowset>
    <row>
    <addressId>1</addressId>
    <addressLineSeq>1</addressLineSeq>
    <addressLine>myAddressLine1</addressLine>
    </row>
    <row>
    <addressId>1</addressId>
    <addressLineSeq>2</addressLineSeq>
    <addressLine>myAddressLine2</addressLine>
    </row>
    </rowset>

    Use XSU to store in multiple tables.
    "XSU can only store data in a single table. You can store XML across tables, however, by using the Oracle XSLT processor to transform a document into multiple documents and inserting them separately. You can also define views over multiple tables and perform insertions into the views. If a view is non-updatable (because of complex joins), then you can use INSTEAD OF triggers over the views to perform the inserts."
    http://download-west.oracle.com/docs/cd/B19306_01/appdev.102/b14252/adx_j_xsu.htm#i1007013
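    The "views plus INSTEAD OF triggers" route mentioned in that quote can look roughly like the sketch below; the table, view, and column names are hypothetical, not code from the XSU documentation:
    CREATE OR REPLACE VIEW account_full_v AS
      SELECT a.account_id, a.first_name, a.last_name,
             ad.city, ad.state, ad.postal_code
      FROM   account a
      JOIN   account_address ad ON ad.account_id = a.account_id;

    CREATE OR REPLACE TRIGGER account_full_v_ins
    INSTEAD OF INSERT ON account_full_v
    FOR EACH ROW
    BEGIN
      -- distribute one incoming row across the two underlying tables
      INSERT INTO account (account_id, first_name, last_name)
      VALUES (:NEW.account_id, :NEW.first_name, :NEW.last_name);

      INSERT INTO account_address (account_id, city, state, postal_code)
      VALUES (:NEW.account_id, :NEW.city, :NEW.state, :NEW.postal_code);
    END;
    /
    XSU can then insert the transformed <rowset>/<row> documents into the view, and the trigger writes the columns to the underlying tables.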

  • SSIS Excel import skip first rows

    Hello,
    1. Is it possible, during import of data from Excel to a DB table, to skip the first 6 rows, for example?
    2. Also, the Excel data is divided into sections with headers. Is it possible, for example, to skip every 12th row?
    Thank you, V. A.

    1. In the Excel connection you cannot remove the first n rows the way you want.
    2. Not possible.
    1. YES YOU CAN. Actually, you can do this very easily if you know the number of columns that will be imported from your Excel file. In your Data Flow task, you will need to set the "OpenRowset" custom property of your Excel connection (right-click your Excel connection > Properties; in the Properties window, look for OpenRowset under Custom Properties). To ignore the first 5 rows in Sheet1 and import columns A-M, you would enter the following value for OpenRowset: Sheet1$A6:M (notice, I did not specify a row number for column M. You can enter a row number if you like, but in my case the number of rows can vary from one iteration to the next).
    2. AGAIN, YES YOU CAN. You can import the data using a conditional split. You'd configure the conditional split to look for something in each row that uniquely identifies it as a header row, then skip the rows that match this 'header logic'.
    Another option would be to import all the rows and then remove the header rows using a SQL script in the database... like a cursor that deletes every 12th row. Or you could add an identity field with seed/increment of 1/1 and then delete all rows with row numbers that divide evenly by 12. Something like that...
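    The same skip can also be written as a SQL command on the Excel Source, the pattern the "Ignore 2nd row and 4th row" thread below uses. A sketch only; the sheet name and end row are placeholders, and whether the first row of the range is treated as headers depends on the connection's HDR setting:
    SELECT * FROM [Sheet1$A6:M65536]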

  • How to import all column heading of excel to oracle 6i forms(no rows).

    Hello,
    I am new to Oracle Forms.
    There are so many links for importing and exporting data from forms to Excel and Excel to forms.
    But I want to import only the column headings of the Excel file into the form (no rows). I am using Forms 6i.
    Please help me with the same.
    Thank you.

    Reading from any file is always done in a loop through the records. So, just loop only once. That will give you the first record only.
    If you have some code already, you can modify it to something like:
    for i in 1..1 loop
      -- read and process only the first (heading) record here
    end loop;
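    For illustration, a minimal Forms PL/SQL sketch that reads only the first line with TEXT_IO, assuming the sheet has been saved as a delimited text/CSV file (TEXT_IO cannot read a native .xls); the file name and delimiter are assumptions, not part of the original answer:
    DECLARE
      in_file   TEXT_IO.FILE_TYPE;
      head_line VARCHAR2(4000);
    BEGIN
      in_file := TEXT_IO.FOPEN('C:\data\material.csv', 'r');
      -- read only the first line: it holds the column headings
      TEXT_IO.GET_LINE(in_file, head_line);
      TEXT_IO.FCLOSE(in_file);
      -- head_line is now e.g. 'COL1,COL2,COL3'; split it on the
      -- delimiter and assign the pieces to your form items
    END;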

  • How can I take text from a webpage that is in multiple rows and move it into a single row in Excel?

    I need help figuring out how to take data from internet pages and enter it into one single row in Excel, or Numbers if that is the easier way to go. I was also told Access might be good to use. Basically I am going to a chamber of commerce page and wanting to extract the member listing and enter it into a database, one line per member. The data comes in differing numbers of lines, as you will see below (info edited to remove personal info). So I want to take the name of the business, business owner, address, city, state, zip, and phone and put them on one line of a spreadsheet. I want to do this many times over. I think there is a way to do it through AppleScript and Automator, but I have not been successful after 2 weeks of trying and searching. I have over 800 listings and I surely don't want to go through and do them one at a time. Any suggestions?
    Data from website:
    Westrock Coffee
    Mr.
    Collins Industrial Place
    North Little Rock, AR 72113
    Phone:
    Send Email
    Member Since: 2011
    Sweet Creations by DJ
    Ms. J
    allace Bridge Road
    Perryville, AR 72126
    Phone:
    Fax:
    Send Email
    Member Since: 2013
    See Also Woman Owned and/or CEO
    Premium Refreshment Service
    Mr. E
    est Bethany Road
    North , AR 72117
    I want it to look like this
    Company name, owner name, address, city, state, zip, phone
    How can I get the extra data out of the way and remove the formatting so that it will go into Excel? Thanks for any help you can provide. I am not too savvy with code, but I have a friend who is an IT guy that can help. Thanks again.

    So, basically, create 800 individual entries, each one containing everything from business name through the phone (not fax) number, add some commas and spaces to entries, and then put each entry on a separate line?
    1. Go to a website page such as this one -- http://www.littlerockchamber.com/CWT/External/WCPages/WCDirectory/Directory.aspx?ACTION=newmembers -- which seems format-wise very close to what you're trying to scrape.
    2. Cmd-A to select all. Cmd-C to copy it to clipboard.
    3. Open freeware TextWrangler. Cmd-V to paste info from clipboard into a blank TW document.
    4. Remove lines from top and bottom so that only membership list remains.
    5. Process lines to remove everything from "Fax" line through "See Also" line. Only business name through phone number will remain in the file.
    --A. TW > Text > Process Lines containing . . .
    -----(check "Delete matched lines"; uncheck all others)
    -----Enter "Send Email" in the search box.
    -----Click Process.
    --B. Repeat 5A for other lines to be removed
    ------Member Since
    ------See Also
    ------Fax
    6. Insert markers to separate entries:
    TW: Search > Find . . .
    ------(check "Wrap around" and "Grep")
    ------in Find box: \r\r\r\r
    ------in replace box: \r***
    ------Click Replace All
    7. Remove remaining blank lines:
    TW: Search > Find . . .
    ------(check "Wrap around" and "Grep")
    ------in Find box: \r\r
    ------in replace box: \r
    ------Click Replace All
    8. Add comma and space at end of each line:
    TW: Search > Find . . .
    ------(check "Wrap around" and "Grep")
    ------in Find box: $
    ------in replace box: ,  (comma space)
    ------Click Replace All
    9. Remove all returns:
    TW: Search > Find . . .
    ------(check "Wrap around" and "Grep")
    ------in Find box: \r
    ------in replace box: (leave blank)
    ------Click Replace All
    10. Insert returns in place of markers:
    TW: Search > Find . . .
    ------(check "Wrap around" and "Grep")
    ------in Find box: \*\*\*,  (backslash asterisk backslash asterisk backslash asterisk comma space)
    ------in replace box: \r
    ------Click Replace All
    11. Remove trailing comma and blank on each line:
    TW: Search > Find . . .
    ------(check "Wrap around" and "Grep")
    ------in Find box: , $ (comma space dollar sign)
    ------in replace box: (leave blank)
    ------Click Replace All
    Import this text file into Excel or Numbers.

  • Importing several CSV files into Excel

    Hello Everyone,
    I managed to piece together this code which runs several powershell scripts that each output a CSV for every day of the week.
    Next, I located a function that takes a CSV file as input and exports that as a worksheet in an Excel document.
    My problem is, the last column in the Excel file should be the first column, and I just cannot spot why this behavior is occurring.
    #First phase, output CSV files used later in the script.
    Monday.ps1
    Tuesday.ps1
    Wednesday.ps1
    Thursday.ps1
    Friday.ps1
    Saturday.ps1
    Sunday.ps1
    #Now the function to export the CSV from Phase 1 into an Excel Spreadsheet.
    function Export-Excel {
        [cmdletBinding()]
        Param([Parameter(ValueFromPipeline=$true)][string]$junk)
        begin{
            $header=$null
            $row=1
        }
        process{
            if(!$header){
                $i=0
                $header=$_ | Get-Member -MemberType NoteProperty | select name
                $header | %{$Global:ws.cells.item(1,++$i)=$_.Name}
            }
            $i=0
            ++$row
            foreach($field in $header){
                $Global:ws.cells.item($row,++$i)=$($_."$($field.Name)")
            }
        }
    }
    $xl=New-Object -ComObject Excel.Application
    $wb=$xl.WorkBooks.add(1)
    $Global:ws=$wb.WorkSheets.item(1)
    $Global:ws.Name='Sunday'
    import-csv 'C:\Sunday.csv' | Export-Excel
    $Global:ws=$wb.WorkSheets.Add()
    $Global:ws.Name='Saturday'
    import-csv 'C:\Saturday.csv' | Export-Excel
    $Global:ws=$wb.WorkSheets.Add()
    $Global:ws.Name='Friday'
    import-csv 'C:\Friday.csv' | Export-Excel
    $Global:ws=$wb.WorkSheets.Add()
    $Global:ws.Name='Thursday'
    import-csv 'C:\Thursday.csv' | Export-Excel
    $Global:ws=$wb.WorkSheets.Add()
    $Global:ws.Name='Wednesday'
    import-csv 'C:\Wednesday.csv' | Export-Excel
    $Global:ws=$wb.WorkSheets.Add()
    $Global:ws.Name='Tuesday'
    import-csv 'C:\Tuesday.csv' | Export-Excel
    $Global:ws=$wb.WorkSheets.Add()
    $Global:ws.Name='Monday'
    import-csv 'C:\Monday.csv' | Export-Excel
    $xl.Visible=$true

    That is interesting considering this script I found on Hey Scripting Guy.
    http://blogs.technet.com/b/heyscriptingguy/archive/2010/09/09/copy-csv-columns-to-an-excel-spreadsheet-by-using-powershell.aspx
    I have run your version above and have no issues with order. 
    I still recommend using WorkBook.OpenText($csvfile)
    and
    $wb.Sheets($csvfile).Move($wb2.Sheet(1))
    This is much faster and takes much less code.
    ¯\_(ツ)_/¯
    Could you give a terse example?  I'm trying to put something together but am not having much success.
    $sheets = @(LS D:\Scripts\work | select FullName -ExpandProperty FullName)
    $Excel = New-Object -ComObject excel.application
    $Excel.visible = $false
    $file = $Excel.Workbooks.Open("C:\tmp\mytest.xlsx")
    #$wb2 = $file.Sheets
    for ($i=0; $i -lt $sheets.length; $i++)
    {
        $workbook = $Excel.WorkBooks.OpenText($sheets[$i])
        $workbook.Sheets($sheets[$i]).Move($file.Sheet(1))
    }
    $Excel.visible = $true

  • Problem regarding XML field value to XML generation

    Hi Everybody,
    I am facing a typical problem. I am storing XML in an XML datatype field in a table. But when I generate a physical file from the XML field, '&amp;' in the XML field is getting converted into '&' in the physical file. I want the '&amp;' to stay intact.
    Has anyone faced this issue, or does anyone have an idea how to solve it?
    Thanks & Regards,
    Anujit Karmakar

    Hi Anujit,
    How do you generate the physical file from the XML field?
    I did a test on my computer using your XML data with the following script, and '&amp;' exists in my exported file.
    DECLARE @testXML XML
    SELECT @TestXML='<ENVELOPE>
    <HEADER>
    <TALLYREQUEST>Import Data</TALLYREQUEST>
    </HEADER>
    <BODY>
    <IMPORTDATA>
    <LEDGER NAME="Prafulla Ghosh">
    <ADDRESS.LIST TYPE="String">
    <ADDRESS>BEHIND DILIP MATH &amp; DISARI CLUB</ADDRESS>
    </ADDRESS.LIST>
    <MAILINGNAME.LIST TYPE="String">
    <MAILINGNAME>Prafulla Ghosh</MAILINGNAME>
    </MAILINGNAME.LIST>
    <STATENAME>West Bengal</STATENAME>
    </LEDGER>
    </IMPORTDATA>
    </BODY>
    </ENVELOPE>'
    -- We Store the contents of an XML variable to a table
    CREATE TABLE MyXMLTable
    (
    xCol XML
    )
    INSERT INTO MyXMLTable ( xCol )
    SELECT @testXML
    -- We save an XML value to a file.
    DECLARE @Command VARCHAR(255)
    DECLARE @Filename VARCHAR(100)
    SELECT @Filename = 'C:\Files\TestXMLRoutine'
    /* we then insert a row into the table from the XML variable */
    /* so we can then write it out via BCP! */
    SELECT @Command = 'bcp "select xCol from ' + DB_NAME()
    + '..MyXMLTable" queryout '
    + @Filename + ' -w -T -S' + @@servername
    EXECUTE master..xp_cmdshell @command
    --so now the xml is written out to a file
    SELECT CONVERT(nVARCHAR(max),BulkColumn)
    FROM OPENROWSET(BULK 'C:\Files\TestXMLRoutine', SINGLE_BLOB) AS x
    go
    Thanks,
    Lydia Zhang
    TechNet Community Support
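    As a footnote to the script above: the entity survives when the XML column is serialized as text, but it is decoded when the node's text value is extracted, which is one common way the bare '&' ends up in an exported file. A minimal sketch against the same MyXMLTable:
    -- serializing the XML keeps the entity:   ...MATH &amp; DISARI CLUB...
    SELECT CONVERT(nvarchar(max), xCol) FROM MyXMLTable
    -- extracting the node's text value decodes it:   ...MATH & DISARI CLUB...
    SELECT xCol.value('(//ADDRESS)[1]', 'nvarchar(200)') FROM MyXMLTable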

  • Can't insert more than 8000 rows to excel via JOB

    Hello, 
    I have SSIS package where I have two tasks.
    File System task copies file (with header names) to destination folder.
    Data flow task export data from table to copied Excel file. Select are simple "select * from table" without any filters.
    There are about 100 000 rows in the table.
    When I run this task through Visual studio everything works fine. Data to Excel file is exported.
    When I run this package in JOB (Job Activity Monitor) the file is created but 0 rows is exported.
    One interesting thing: when I use "select top 7000 * from table", 7000 rows are inserted, but when I try to export more than 8000 rows, 0 rows are exported. The JOB generates 0 errors. In fact it does try to export, because the Excel file's "Date modified" keeps changing the whole time.
    It seems like the data is exported but the final "commit" isn't done. Does anybody know where the problem could be?

    Out of curiosity I have had a go at reading 70K lines of data from a table in a sql server and exporting them to an excel 2007 file:
    SSIS 2012 project deployment
    2 project connections: sql database connection for source
    and an Excel 2007 connection for destination:
    Provider=Microsoft.ACE.OLEDB.12.0;Data Source=F:\Data\Outbox\ExcelOut.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES";
    I have an empty excel 2007 file as template with header rows in the first sheet.
    I test the project in VS 2012, ran without problems. 
    I set DelayValidation to true at the package level and the data flow level.
    In the data flow, on the Excel destination, I set ValidateExternalMetadata to False.
    I deploy the project with the package to an Integration Services catalog on a SQL 2012 server (not my development machine)
    I copy an empty excel 2007 file (with the headers in the first row) in the correct file location.
    I execute the package from SSMS in the Integration services catalogs, with 32 bit checked.
    The package runs without errors. 
    I can open the excel file and see 70K records.
    Last thought: I have no NULL values in my data. Do you have null values after the first 7000 records? Or is one of the Excel columns expecting numerical data where it suddenly finds alphanumeric data after 7000 rows?
    Another suggestion: if you run the package with SSMS from the catalog, you can set the verbose reporting mode on, does that tell you anything more?
    Jan D'Hondt - SQL server BI development

  • Display values of a single field in a multiple rows in a table region

    Hi Tech-Gurus,
    I want to display the values of a single field (which is in a table region) in multiple rows, and I also need to restrict the values so that decimal numbers are rejected. If I click Save, it should throw the exception "Decimal not allowed".
    xxxxxx
    yyyyyy
    Reg.No
    1234
    5678
    7654
    I need to display the values of REG.NO in different rows like,
    1234
    5678
    7654
    and also need to validate as well against Decimal values.
    Please help me with the code for how I should iterate.

    Hi,
    I am assuming you are talking about displaying substrings from the Reg No in different rows. For this you would need to write a query which identifies the substrings and creates a separate row for each (ensure you choose values for all the other columns in the table row). Kindly let me know if this understanding is incorrect.
    To validate against decimal values you can use Java code that checks whether the number differs from the number after the modulus has been applied. Hope that helps.
    Regards
    Sumit
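    If the registration numbers arrive as one delimited string, the kind of query Sumit describes could look like the sketch below; the delimiter, values, and column name are made up for illustration, and REGEXP_COUNT requires 11g or later:
    -- split one comma-separated REG_NO value into separate rows
    SELECT REGEXP_SUBSTR(reg_no, '[^,]+', 1, LEVEL) AS reg_no_part
    FROM   (SELECT '1234,5678,7654' AS reg_no FROM dual)
    CONNECT BY LEVEL <= REGEXP_COUNT(reg_no, '[^,]+');
    -- a value has no decimal part when MOD(TO_NUMBER(value), 1) = 0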

  • How to generate a value to field in a row based on another two fields in the same row ???

    I have a quantity and a unit price, and there is also a field in the same row called total price. I want the value of total price to be calculated automatically by multiplying quantity and unit price.

    Here you go:
    page code
                            <af:panelGroupLayout id="pgl2" layout="horizontal" styleClass="AFStretchWidth">
                                <af:inputText label="Quantity" id="quantity" value="#{bindings.quantity1.inputValue}" autoSubmit="true"
                                              valueChangeListener="#{SetItemBean.valueChangeListenerItem}"/>
                                <af:spacer width="10" height="10" id="s1"/>
                                <af:inputText label="Price" id="price" value="#{bindings.price1.inputValue}" autoSubmit="true"
                                              valueChangeListener="#{SetItemBean.valueChangeListenerItem}"/>
                                <af:spacer width="10" height="10" id="s2"/>
                                <af:inputText label="Sum" id="total" value="#{bindings.total1.inputValue}" partialTriggers="quantity price" readOnly="true"/>
                            </af:panelGroupLayout>
    page bindings
    <?xml version="1.0" encoding="UTF-8" ?>
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="12.1.2.66.68" id="SetItemPageDef" Package="de.hahn.blogtest12c.view.pageDefs">
      <parameters/>
      <executables>
        <variableIterator id="variables">
          <variable Name="quantity" Type="java.lang.Number"/>
          <variable Name="price" Type="java.lang.Number"/>
          <variable Name="total" Type="java.lang.Number"/>
        </variableIterator>
      </executables>
      <bindings>
        <attributeValues IterBinding="variables" id="quantity1">
          <AttrNames>
            <Item Value="quantity"/>
          </AttrNames>
        </attributeValues>
        <attributeValues IterBinding="variables" id="price1">
          <AttrNames>
            <Item Value="price"/>
          </AttrNames>
        </attributeValues>
        <attributeValues IterBinding="variables" id="total1">
          <AttrNames>
            <Item Value="total"/>
          </AttrNames>
        </attributeValues>
      </bindings>
    </pageDefinition>
    and bean code
        public void valueChangeListenerItem(ValueChangeEvent valueChangeEvent) {
            String comp = valueChangeEvent.getComponent().getId();
            BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
            // get an ADF attributevalue from the ADF page definitions
            AttributeBinding attr = (AttributeBinding) bindings.getControlBinding("quantity1");
            Number q = (Number) attr.getInputValue();
            if (q == null)
                q = 0;
            attr = (AttributeBinding) bindings.getControlBinding("price1");
            Number p = (Number) attr.getInputValue();
            if (p == null)
                p = 0;
            if ("price".equals(comp)) {
                p = (Number) valueChangeEvent.getNewValue();
            } else {
                q = (Number) valueChangeEvent.getNewValue();
            }
            // set new value
            attr = (AttributeBinding) bindings.getControlBinding("total1");
            Number t = q.doubleValue() * p.doubleValue();
            attr.setInputValue(t);
            // update Total
            FacesContext facesCtx = FacesContext.getCurrentInstance();
            UIComponent ui = facesCtx.getViewRoot().findComponent("total");
            if (ui != null) {
                // PPR refresh a jsf component
                AdfFacesContext.getCurrentInstance().addPartialTarget(ui);
            }
        }
    Timo

  • SQL query Output  exceeds 65000 rows in excel

    Hi,
    I have a SQL file attached to my concurrent program. The query in the SQL file returns tab-separated output that I need to email as a CSV through the same SQL file.
    The issue I am facing is that the query returns about 157000 records, but only 65536 are displayed in the Excel worksheet (the row limit of an .xls worksheet).
    Please help!
    I am using EBS 11i.
    Code I am using in SQL file:
    set pages 3000
    set lines 300
    set heading off
    set term off
    set feedback off
    set verify off
    set echo off
    set serverout off
    spool /tmp/TEST.csv
    select 'Header'
    from dual;
    SELECT column1||CHR(9)||column2
    FROM staging_table;
    spool off
    set lines 1000 pages 40 head off echo off veri off
    select 'cd /tmp; cat $XX_TOP/install/sql/disclaimer.txt > x.dat; cat '||'/tmp/TEST.csv'||' | uuencode '||'/tmp/TEST.csv'
    ||' > '||substr('/tmp/TEST.csv',1,length('/tmp/TEST.csv')-4)||'.dat >> x.dat'
    ||' ; mail -s ''' ||'&1'||''' '||'&2'||' < x.dat'
    ||' ; rm '||substr('/tmp/TEST.csv',1,length('/tmp/TEST.csv')-4)||'.dat'||' ;'
    from dual
    list
    spool /tmp/email.txt
    spool off
    host chmod 777 /tmp/email.txt
    host /tmp/email.txt
    Thanks
    Edited by: user10648285 on May 13, 2011 5:27 AM

    Open a blank Excel (2007 version) workbook, go to the Data tab, then under External Data click From Text, select the CSV file (the one having more than 65k records), click Import, then for "start import at row" select 65001, click Next, select Comma as the delimiter, click Next, and finally click Finish.
    In this way you will have the data after 65k rows in Excel. Then simply open that CSV file in Excel directly and you will have the first 65k records (the remaining part will be discarded).
    So now you have two Excel files and you can paste them into two different tabs.
    Note - you can repeat this activity if you have more than 130000 records...
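    Another way to avoid the manual import step is to break the extract into chunks of at most 65,000 rows on the database side and spool each chunk to its own file, one file per worksheet. A sketch only, reusing the staging_table from the script above and assuming no particular ordering requirement:
    spool /tmp/TEST_part1.csv
    SELECT column1||CHR(9)||column2
    FROM  (SELECT s.*, ROWNUM rn FROM staging_table s)
    WHERE rn <= 65000;
    spool off

    spool /tmp/TEST_part2.csv
    SELECT column1||CHR(9)||column2
    FROM  (SELECT s.*, ROWNUM rn FROM staging_table s)
    WHERE rn > 65000 AND rn <= 130000;
    spool off
    A third spool with WHERE rn > 130000 covers the remaining rows of the 157000-record extract.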

  • Gui_Download export to excel delivering only one row in excel output

    Hello,
    I am using the Vista OS with the SAP front end (SAP GUI) 710 and Office 2007. Also, my company is using ECC 6.0 on the server.
    I am able to export the itab data to Excel, but all the rows end up on the first row in Excel.
    If I have 3 rows in the itab, then all 3 rows are on the first row of Excel.
    I have supplied these parameters
    filename = 'C:\ITAB2XLS.xls'
    FILETYPE = 'ASC'
    WRITE_FIELD_SEPARATOR = 'X'
    SHOW_TRANSFER_STATUS = 'X'
    tables
    data_tab = itab[]
    FIELDNAMES = headingTab[]
    Is anyone having the same issue like me ?
    Thanks.

    Hi,
    I tried commenting out the field separator, but it didn't change anything. I am still having the same problem.
    I am pasting my code sample.
    data:begin of itab occurs 0,
    grp type c,
    val type i,
    end of itab.
    data:begin of headingTab occurs 0,
    TEXT(10) type c,
    end of headingTab.
    itab-grp = 'A'.
    itab-val = 100.
    append itab.
    itab-grp = 'B'.
    itab-val = 200.
    append itab.
    headingTab-text = 'GROUP'.
    append headingTab.
    headingTab-text = 'VALUE'.
    append headingTab.
    call function 'GUI_DOWNLOAD'
    exporting
    filename = 'C:\ITAB2XLS.xls'
    FILETYPE = 'DAT'
    WRITE_FIELD_SEPARATOR = 'X'
    SHOW_TRANSFER_STATUS = 'X'
    tables
    data_tab = itab[]
    FIELDNAMES = headingTab[].
    I think there is some problem with Vista / Office 2007 and SAP.
    Any suggestions?

  • Separate tab in excel sheet

    Hi All,
    I am writing a report to get data related to the material master and download it to an Excel sheet.
    While downloading, I need to use a separate tab in the Excel sheet for each area of the material master, for example, a tab for Basic 1, Basic 2, MRP 1, etc.
    Regards,
    Gaurav

    Hi, try this:
    While you are printing your output, give space between the fields.
    For example, you can write your required field in place of "text-000".
    WRITE:/2  text-002 ,
            16  text-003 ,
            25  text-004 ,
            40  text-005 ,
            60  text-006 ,
            75  text-007 ,
            90  text-008 ,
           100 text-009 .

  • Ignore 2nd row and 4th row in Excel Sheet in SSIS Package

    Hi All,
    I have an SSIS package that imports an Excel sheet in which I have to ignore the 2nd row and the 4th row.
    Please help me with this issue.

    Hi ShyamReddy,
    Based on my test, if the second and fourth rows need to be skipped based on some conditions, we can directly add WHERE conditions in a single Excel Source (via the Edit option). Otherwise, we can union three Excel Sources to work around this issue. For more details, please refer to the following steps:
    Set the FirstRowHasColumnName property to False; the first row of the sheet still stores the column names, but it is excluded by the ranges below and the provider names the columns F1, F2, F3.
    Drag three Excel Sources to the Data Flow Task.
    In the Excel Source, use the SQL command below to replace the former (supposing there are three columns in the Excel sheet: col1, col2 and col3):
    SELECT F1 AS col1, F2 AS col2, F3 AS col3 FROM [sheet$A2:C2]
    In the Excel Source 1, please type the SQL command below:
    SELECT F1 AS col1, F2 AS col2, F3 AS col3 FROM [sheet$A4:C4]
    In the Excel Source 2, please type the SQL command below (note that the 'n' means the number of rows in the sheet):
    SELECT F1 AS col1, F2 AS col2, F3 AS col3 FROM [sheet$A6:Cn]
    Drag a Union All component to the same task, then union those three Excel Sources.
    References:
    SSIS Excel import skip first rows
    sql command for reading a particular sheet, column
    Hope this helps.
    Thanks,
    Katherine Xiong
    Katherine Xiong
    TechNet Community Support
