Merging Data from CSV

I have a Numbers document with addresses that I want to use to create a mail merge. How can I do this in Pages '08?

jayscott wrote:
I have a Numbers document with addresses that I want to use to create a mail merge. How can I do this in Pages '08?
Reading the User Guide delivered with the application (a PDF file) would be a good starting point.
Doing that, you will learn that in iWork '08 the only data source for a merge task is Address Book.
Yvan KOENIG (VALLAURIS, France) Saturday, September 18, 2010 10:39:50

Similar Messages

  • Can we merge data from multiple sources in Hyperion Interactive Reporting ?

    Hi Experts,
    Can we merge data from multiple sources in Hyperion Interactive Reporting? For example, can we have a report based on DB2, Oracle, and Informix, and multidimensional databases like DB2, MSOLAP, and Essbase?
    Thanks,
    V

    Yes. Each source has its own query, and the Results sections share a common dimension on which they can be joined together in a final query.
    Look in Help under "Creating Local Joins".

  • Loading data from .csv file into Oracle Table

    Hi,
    I have a requirement where I need to populate an oracle table with data from a .csv file.
    Is there a mechanism I can use to do this?
    Any help will be appreciated.
    Thanks and regards

    You can use SQL*Loader or external tables for this requirement (an external-table sketch follows after this reply).
    Missed Karthick's post ... already there :)
    Edited by: Rajneesh Kumar on Dec 4, 2008 10:54 AM
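    For the external-table route, a minimal sketch (the directory, table, and file names here are hypothetical, and the directory object must point at the folder holding the CSV):
    create or replace directory csv_dir as '/data/csv';
    create table addresses_ext (
      name   varchar2(100),
      street varchar2(200),
      city   varchar2(100)
    )
    organization external (
      type oracle_loader
      default directory csv_dir
      access parameters (
        records delimited by newline
        skip 1                          -- skip the header row
        fields terminated by ','
        optionally enclosed by '"'
        missing field values are null
      )
      location ('addresses.csv')
    )
    reject limit unlimited;
    -- the load itself is then a plain insert-select:
    insert into my_table select * from addresses_ext;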

  • Loading data from .csv file into existing table

    Hi,
    I have taken a look at several threads which talk about loading data from a .csv file into an existing/new table. I also checked out Vikas's application regarding the same. I am trying to explain my requirement with an example.
    I have a .csv file and I want the data to be loaded into an existing table. The timesheet table columns are:
    timesheet_entry_id, time_worked, timesheet_date, project_key.
    The csv columns are:
    project, utilization, project_key, timesheet_category, employee, timesheet_date, hours_worked, etc.
    What I need to know is: before the csv data is loaded into the timesheet table, is there any way of validating the project key (which is the primary key of the projects table) against the projects table? I need to perform similar validations on other columns, such as customer_id against the customers table. Basically, the load should happen only after validating that the data exists in the parent table. Has anyone done this kind of loading through the APEX data-load utility? Or is there another method of accomplishing the same?
    Does Vikas's application do what the utility does (I am assuming that, the code being from 2005, the utility was not incorporated in APEX at that time)? Any helpful advice is greatly appreciated.
    Thanks,
    Anjali

    Hi Anjali,
    Take a look at these threads which might outline different ways to do it -
    File Browse, File Upload
    Loading CSV file using external table
    Loading a CSV file into a table
    You can also create hidden items on the page to validate each record before inserting the data (a rough sketch follows after this reply).
    Hope this helps,
    M Tajuddin
    http://tajuddin.whitepagesbd.com
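    To illustrate the parent-table check, an APEX validation along these lines (type "PL/SQL Function Returning Boolean") could run before each insert; the page item, key, and table names are hypothetical:
    declare
      l_cnt number;
    begin
      -- accept the row only if its project key exists in the parent table
      select count(*)
        into l_cnt
        from projects
       where project_key = :P10_PROJECT_KEY;
      return l_cnt > 0;
    end;
    The same pattern applies to customer_id against the customers table.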

  • How to load data from a .csv file to an oracle table?

    Hi,
    I am using oracle 10g and PL/SQL Developer. Can anyone help me with how to load data from a .csv file into an oracle table? The table is already created with the required columns. The .csv file has about 10 lakh (1 million) records. Is it possible to load 10 lakh records? Can anyone please tell me how to proceed?
    Thanks in advance

    981145 wrote:
    Can you tell me more about SQL*Loader? How do I know whether that utility is available to me or not? I am using an Oracle 10g database and PL/SQL Developer.
    SQL*Loader is part of the Oracle client. If you have a developer installation you should normally have it on your client.
    The command is
    sqlldr
    Type it and see if you have it installed (a minimal example follows after this reply).
    Have a look also at the FAQ link posted by Marwin.
    There are plenty of examples also on the web.
    Regards.
    Al
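    For completeness, a minimal sketch of the SQL*Loader route (the file, table, and column names are hypothetical; for around a million rows, direct path is usually much faster than a conventional load):
    -- load_emp.ctl: comma-separated file with a header row
    -- and optional double-quote enclosures
    OPTIONS (SKIP=1, DIRECT=TRUE)
    LOAD DATA
    INFILE 'emp.csv'
    APPEND INTO TABLE emp
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (emp_id, emp_name, hire_dt DATE "YYYY-MM-DD")
    -- run from the command line as:
    --   sqlldr scott/tiger control=load_emp.ctl log=load_emp.log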

  • Error while loading data from CSV with a CTL file?

    Hi TOM,
    When I try to load data from a CSV file into this table,
    CTL File content:
    load data
    into table XXXX append
         Y_aca position char (3),
         x_date position date 'yyyy/mm/dd'
    NULLIF (x_date = ' '),
    X_aca position (* + 3) char (6)
    "case when :Y_aca = 'ABCDDD' and :XM_dt is null then
    decode(:X_aca,'AB','BA','CD',
    'DC','EF','FE','GH','HG',:X_aca)
    else :X_aca
    end as X_aca",
    Z_cdd position char (2),
         XM_dt position date 'yyyy/mm/dd'
    NULLIF XM_dt = ' ',
    When I try the above CTL file, I get the following error:
    SQL*Loader-281: Warning: ROWS parameter ignored in parallel mode.
    SQL*Loader-951: Error calling once/load initialization
    ORA-02373: Error parsing insert statement for table "XYZ"."XXXX".
    ORA-00917: missing comma

    Possible Solutions
    Make sure that the data source is valid.
    Is a member from each dimension specified correctly in the data source or rules file?
    Is the numeric data field at the end of the record? If not, move the numeric data field in the data source or move the numeric data field in the rules file.
    Are all members that might contain numbers (such as "100") enclosed in quotation marks in the data source?
    If you are using a header, is the header set up correctly? Remember that you can add missing dimension names to the header.
    Does the data source contain extra spaces or tabs?
    Has the updated outline been saved?
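    Independent of the checklist above, an ORA-00917 from SQL*Loader usually means the column list could not be parsed; in the CTL file quoted in the question, the column list has no opening parenthesis and the POSITION keywords have no arguments. For comparison, a well-formed positional control file looks roughly like this (the positions are hypothetical):
    LOAD DATA
    INFILE 'data.csv'
    APPEND INTO TABLE xxxx
    (
      y_aca  POSITION(1:3)   CHAR,
      x_date POSITION(4:13)  DATE "YYYY/MM/DD" NULLIF x_date = BLANKS,
      x_aca  POSITION(*+3)   CHAR(6),
      z_cdd  POSITION(23:24) CHAR,
      xm_dt  POSITION(25:34) DATE "YYYY/MM/DD" NULLIF xm_dt = BLANKS
    )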

  • Merging data from 2 schemas with same object structure into one schema

    Hi
    I want to merge data from 2 schemas in different environments (say Test1 and Test2) into one schema (say Test_final) for testing. Both schemas have the same structure; the data may be the same or different.
    What I did was take an export of the schema on Test1 and import it into Test_final. Now I need to merge/append the data from Test2 into Test_final.
    I cannot merge the data with import due to primary key constraints, and import doesn't support this anyway, so I tried SQL*Loader to append the data, using a sequence to generate the primary keys.
    But my worry is that since new primary keys are generated, the foreign keys will become invalid and the data will not be consistent.
    Is there any other way to do this task?
    Regards
    Raman

    This approach might be better...
    create table test_final as
    select * from schema1.test1;
    insert into test_final
    select t2.*
    from schema2.test1 t2
    where not exists (select 1 from test_final tf where tf.pk = t2.pk);
    ...assuming duplicate primary keys mean duplicate records. If that assumption does not hold, then you have a more complex data migration exercise on your hands and you need to figure out some rules to determine which version of the data takes precedence.
    Cheers, APC
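    If you do need to regenerate the primary keys (as in Raman's SQL*Loader approach), the usual way to keep children consistent is an old-to-new key map. A rough sketch, with hypothetical table and column names:
    -- 1. build an old->new key mapping for the rows coming from Test2
    create table pk_map as
    select t2.pk as old_pk,
           (select max(pk) from test_final) + rownum as new_pk
    from   schema2.test1 t2;
    -- 2. insert the parent rows under their new keys
    insert into test_final (pk, col1)
    select m.new_pk, t2.col1
    from   schema2.test1 t2
    join   pk_map m on m.old_pk = t2.pk;
    -- 3. repoint the child rows' foreign keys through the map
    update child_table c
    set    c.parent_pk = (select m.new_pk from pk_map m
                          where  m.old_pk = c.parent_pk)
    where  exists (select 1 from pk_map m where m.old_pk = c.parent_pk);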

  • Import data from csv to SQL database

    How do I import data from csv to a SQL database through BLS-xMII?

    Hi,
    if you are using Oracle, you can also do a bulk insert with MII: use the String List to XML parser to convert your CSV to XML, and then use a query that calls a stored procedure which does the bulk insert.
    Also have a look at the thread "read XML file into stored procedure" (http://forums.sdn.sap.com/click.jspa?searchID=63850236&messageID=7455643).
    Michael
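    On the Oracle side, the stored procedure Michael describes could look roughly like this; the XML shape, table, and column names are assumptions for illustration only:
    -- hypothetical: receives the CSV rows converted to XML, e.g.
    -- <Rows><Row><Name>A</Name><Qty>1</Qty></Row>...</Rows>,
    -- and bulk-inserts them into the target table in one statement
    create or replace procedure load_csv_xml (p_xml in clob) as
    begin
      insert into target_table (name, qty)
      select x.name, x.qty
      from   xmltable('/Rows/Row'
               passing xmltype(p_xml)
               columns name varchar2(100) path 'Name',
                       qty  number        path 'Qty') x;
    end load_csv_xml;
    /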

  • Loading data from CSV to Unix database

    Hi All
    We copied the CSVs onto a UNIX box and tried to upload the data into oracle. We are able to load the data, but while some CSVs load perfectly, others load extra characters into the columns.
    Even when I put the CSV files on Windows and load the data into the UNIX database, I face the same problem.
    But if I use the same CSVs and load the data into a Windows database, it works fine.
    Can anybody suggest a solution?
    Regards,
    Kumar.

    ... oh, what a confusion. I already answered in the ittoolbox group:
    "Hi,
    the problem is the different character sets in the Windows and UNIX environments, so some of the characters are misinterpreted.
    Is the database where you first loaded the CSVs also on UNIX? How did you copy the files? Via FTP? In ASCII mode, I hope?"
    Regards,
    Detlef
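    To pin down exactly which extra bytes were loaded, Oracle's DUMP function helps (the table and column names below are hypothetical): a trailing 0d byte means Windows carriage returns survived the transfer, while other unexpected bytes point at a character-set mismatch.
    -- show the raw bytes behind values that contain a carriage return
    select col1, dump(col1, 1016) as bytes_and_charset
    from   your_table
    where  instr(col1, chr(13)) > 0;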

  • How do I merge data from table1 on server 1 to final table on server 2 with a stored procedure to execute every so many hours.

    How do I merge data from table1 on server 1 into a final table on server 2, with a stored procedure set to execute every so many hours?

    How big is the table on server B? Would it be possible to bring all the data into server A and merge it locally?
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • How do I merge data from table1 on server 1 to final table on server 2 with a stored procedure to execute every 4 hours.

    How do I merge data from table1 on server 1 into a final table on server 2, with a stored procedure set to execute every so many hours?

    Hello,
    If you have configured server2 as a linked server on server1, you can run the following statement inside a stored procedure to copy the table data, then create a job to run the stored procedure every 4 hours.
    Insert Into Server2.Database2.dbo.Table2
    (Cols)
    Select Cols From Server1.Database1.dbo.Table1
    Or you can use the SQL Server Import and Export Wizard to export the data from server1 to server2, save the SSIS package created by the wizard on the SQL Server, and create a job to run the SSIS package.
    Reference: http://technet.microsoft.com/en-us/library/ms141209.aspx
    Regards,
    Fanny Liu
    TechNet Community Support
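    If "merge" here means upsert rather than plain append, T-SQL MERGE can do it in one statement. MERGE requires a local target table, so the sketch below (with hypothetical key and column names) would run on server2 with server1 configured as the linked server, again wrapped in a job scheduled every 4 hours:
    MERGE Database2.dbo.Table2 AS target
    USING Server1.Database1.dbo.Table1 AS source
       ON target.id = source.id
    WHEN MATCHED THEN
        UPDATE SET target.col1 = source.col1
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (id, col1) VALUES (source.id, source.col1);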

  • Error when executing interface which loads data from csv file which has 320

    Hi,
    Can someone provide a resolution for the error below?
    I have created an interface which loads data from a csv file with 320 columns into a synonym which also has 320 columns,
    using LKM File to SQL and IKM SQL Control Append.
    I am getting the error below when executing the interface:
    com.sunopsis.tools.core.exception.SnpsSimpleMessageException: ODI-17517: Error during task interpretation. Task: 6 java.lang.Exception: BeanShell script error: Sourced file: inline evaluation of: ``out.print("The application script threw an exception: java.lang.StringIndexOutOf . . . '' Token Parsing Error: Lexical error at line 2, column 42. Encountered: "\\" (92), after : "": <at unknown location> BSF info: Create external table at line: 0 column: columnNo
    at com.sunopsis.dwg.codeinterpretor.SnpCodeInterpretor.transform(SnpCodeInterpretor.java:485)
         at com.sunopsis.dwg.dbobj.SnpSessStep.createTaskLogs(SnpSessStep.java:711)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:461)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: org.apache.bsf.BSFException: BeanShell script error: Sourced file: inline evaluation of: ``out.print("The application script threw an exception: java.lang.StringIndexOutOf . . . '' Token Parsing Error: Lexical error at line 2, column 42. Encountered: "\\" (92), after : "": <at unknown location>
    BSF info: Create external table at line: 0 column: columnNo
         at bsh.util.BeanShellBSFEngine.eval(Unknown Source)
         at bsh.util.BeanShellBSFEngine.exec(Unknown Source)
         at com.sunopsis.dwg.codeinterpretor.SnpCodeInterpretor.transform(SnpCodeInterpretor.java:471)
         ... 11 more
    Text: The application script threw an exception: java.lang.StringIndexOutOfBoundsException: String index out of range: 2 BSF info: Create external table at line: 0 column: columnNo
    out.print("createTblCmd = r\"\"\"\ncreate table ") ;
    out.print(odiRef.getTable("L", "COLL_NAME", "W")) ;
    out.print("<?=(extTabColFormat.getUseView())?\"_ET\":\"\"?>\n(\n\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\t"+
              "<?=extTabColFormat.getExtTabDataType(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022[DEST_WRI_DT]\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
         , ",\\n\\t", "","")) ;
    out.print("\n)\nORGANIZATION EXTERNAL\n(\n\tTYPE ORACLE_LOADER\n\tDEFAULT DIRECTORY dat_dir\n\tACCESS PARAMETERS\n\t(\n\t\tRECORDS DELIMITED BY 0x'") ;
    out.print(odiRef.getSrcTablesList("[XFILE_SEP_ROW]","")) ;
    out.print("'\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_CHARACTERSET")) ;
    out.print("\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_STRING_SIZE")) ;
    out.print("\n\t\tBADFILE\t\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.bad'\n\t\tLOGFILE\t\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.log'\n\t\tDISCARDFILE\t'") ;
    out.print(odiRef.getSrcTablesList("", "[RES_NAME]", "", "")) ;
    out.print("_%a.dsc'\n\t\tSKIP \t\t") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_FIRST_ROW]", "", "")) ;
    out.print("\n") ;
    if (odiRef.getSrcTablesList("", "[FILE_FORMAT]", "", "").equals("F")) {out.print("\n\t\tFIELDS\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_MISSING_FIELD")) ;
    out.print("\n\t\t(\n\t\t\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\tPOSITION([FILE_POS]:[FILE_END_POS])\\t"+
                        "<?=extTabColFormat.getExtTabFormat(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022DEST_WRI_DT\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
                        , ",\\n\\t\\t\\t", "","")) ;
    out.print("\t\t\n\t\t)\n\t)\n") ;
    } else {out.print("\n\t\tFIELDS TERMINATED BY x'") ;
    out.print(odiRef.getSrcTablesList("", "[XFILE_SEP_FIELD]", "", "")) ;
    out.print("'\n\t\t") ;
    if(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").equals("")){out.print("\n\t\t") ;
    } else {out.print("OPTIONALLY ENCLOSED BY '") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(0,1)) ;
    out.print("' AND '") ;
    out.print(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(1,2)) ;
    out.print("' ") ;
    }out.print("\n\t\t") ;
    out.print(odiRef.getUserExit("EXT_MISSING_FIELD")) ;
    out.print("\n\t\t(\n\t\t\t") ;
    out.print(odiRef.getColList("", "[CX_COL_NAME]\\t"+
                        "<?=extTabColFormat.getExtTabFormat(\\u0022[CX_COL_NAME]\\u0022,\\u0022[SOURCE_DT]\\u0022, \\u0022DEST_WRI_DT\\u0022, \\u0022[COL_FORMAT]\\u0022, \\u0022[BYTES]\\u0022, \\u0022[LONGC]\\u0022, \\u0022[SCALE]\\u0022)?>"
                        , ",\\n\\t\\t\\t", "","")) ;
    out.print("\t\t\n\t\t)\n\t)\n") ;
    }out.print("\tLOCATION (") ;
    out.print(odiRef.getSrcTablesList("", "'[RES_NAME]'", "", "")) ;
    out.print(")\n)\n") ;
    out.print(odiRef.getUserExit("EXT_PARALLEL")) ;
    out.print("\nREJECT LIMIT ") ;
    out.print(odiRef.getUserExit("EXT_REJECT_LIMIT")) ;
    out.print("\n\"\"\"\n \n# Create the statement\nmyStmt = myCon.createStatement()\n \n# Execute the trigger creation\nmyStmt.execute(createTblCmd)\n \nmyStmt.close()\nmyStmt = None\n \n# Commit, just in case\nmyCon.commit()") ;
    ****** ORIGINAL TEXT ******
    createTblCmd = r"""
    create table <%=odiRef.getTable("L", "COLL_NAME", "W")%><?=(extTabColFormat.getUseView())?"_ET":""?>
         <%=odiRef.getColList("", "[CX_COL_NAME]\t"+
              "<?=extTabColFormat.getExtTabDataType(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022[DEST_WRI_DT]\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
         , ",\n\t", "","")%>
    ORGANIZATION EXTERNAL
         TYPE ORACLE_LOADER
         DEFAULT DIRECTORY dat_dir
         ACCESS PARAMETERS
              RECORDS DELIMITED BY 0x'<%=odiRef.getSrcTablesList("[XFILE_SEP_ROW]","")%>'
              <%=odiRef.getUserExit("EXT_CHARACTERSET")%>
              <%=odiRef.getUserExit("EXT_STRING_SIZE")%>
              BADFILE          '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.bad'
              LOGFILE          '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.log'
              DISCARDFILE     '<%=odiRef.getSrcTablesList("", "[RES_NAME]", "", "")%>_%a.dsc'
              SKIP           <%=odiRef.getSrcTablesList("", "[FILE_FIRST_ROW]", "", "")%>
    <% if (odiRef.getSrcTablesList("", "[FILE_FORMAT]", "", "").equals("F")) {%>
              FIELDS
              <%=odiRef.getUserExit("EXT_MISSING_FIELD")%>
                   <%=odiRef.getColList("", "[CX_COL_NAME]\tPOSITION([FILE_POS]:[FILE_END_POS])\t"+
                        "<?=extTabColFormat.getExtTabFormat(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022DEST_WRI_DT\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
                        , ",\n\t\t\t", "","")%>          
    <%} else {%>
              FIELDS TERMINATED BY x'<%=odiRef.getSrcTablesList("", "[XFILE_SEP_FIELD]", "", "")%>'
              <% if(odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").equals("")){%>
              <%} else {%>OPTIONALLY ENCLOSED BY '<%=odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(0,1)%>' AND '<%=odiRef.getSrcTablesList("", "[FILE_ENC_FIELD]", "", "").substring(1,2)%>' <%}%>
              <%=odiRef.getUserExit("EXT_MISSING_FIELD")%>
                   <%=odiRef.getColList("", "[CX_COL_NAME]\t"+
                        "<?=extTabColFormat.getExtTabFormat(\u0022[CX_COL_NAME]\u0022,\u0022[SOURCE_DT]\u0022, \u0022DEST_WRI_DT\u0022, \u0022[COL_FORMAT]\u0022, \u0022[BYTES]\u0022, \u0022[LONGC]\u0022, \u0022[SCALE]\u0022)?>"
                        , ",\n\t\t\t", "","")%>          
    <%}%>     LOCATION (<%=odiRef.getSrcTablesList("", "'[RES_NAME]'", "", "")%>)
    <%=odiRef.getUserExit("EXT_PARALLEL")%>
    REJECT LIMIT <%=odiRef.getUserExit("EXT_REJECT_LIMIT")%>
    # Create the statement
    myStmt = myCon.createStatement()
    # Execute the trigger creation
    myStmt.execute(createTblCmd)
    myStmt.close()
    myStmt = None
    # Commit, just in case
    myCon.commit().
         at com.sunopsis.dwg.dbobj.SnpSessStep.createTaskLogs(SnpSessStep.java:738)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:461)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:662)

    The issue is encountered because the text delimiter defined for the source file is a single character rather than a pair of delimiters: the LKM template calls substring(0,1) and substring(1,2) on the [FILE_ENC_FIELD] value, so a one-character delimiter raises the StringIndexOutOfBoundsException seen above.
    Please see support Note [ID 1469977.1] for details.

  • Merging data from multiple BAPI tables

    Hello,
    I'm executing a BAPI_XX_GETLIST with multiple "Tables" selected via the REQUESTEDTABLESX import parameter. What is the best way to combine/merge data from multiple "Tables" and bind it to a single Web Dynpro UI table?
    Thanks.

    Hi,
    Let's say the BAPI name is BAPI_XX_GETLIST and the table names are ET_TABLE1 (Field1, Field2, Field3) and ET_TABLE2 (FieldA, FieldB). You have to display the fields of ET_TABLE1 (Field1, Field2, Field3) and one field from ET_TABLE2 (FieldB). FieldA is the key field.
    Create a value node (vnField) under the model node ET_TABLE1 with cardinality 1:1, and create a value attribute (vaFieldB) inside it. For vnField, create a supply function. If you go to the implementation you will see a method supplyVnField(IPrivateCO_ComponentName.IVnFieldNode node, IPrivateCO_ComponentName.IET_Table1Element parentElement).
    Write the following code in the implementation.
    // Create an element for the value node, find the ET_TABLE2 row whose
    // key (FieldA) matches the parent row, and copy FieldB into it.
    IPrivateCO_ComponentName.IVnFieldElement element = node.createVnFieldElement();
    for (int i = 0; i < wdContext.nodeET_Table2().size(); i++) {
        if (parentElement.getField3().equals(
                wdContext.nodeET_Table2().getET_Table2ElementAt(i).getFieldA())) {
            element.setVaFieldB(wdContext.nodeET_Table2().getET_Table2ElementAt(i).getFieldB());
        }
    }
    node.bind(element);
    Hope this solves your problem.
    Regards,
    Santhosh.C

  • Merging data from SAP BEx queries with SQL - Keys are details not Dimension

    I have a challenge when trying to merge data from a BEx query and a relational source in SQL Server.
    I have a characteristic for Material, with an associated Material Key that is an attribute of Material in the BEx query. On the SQL side I have a Material ID, which is the unique identifier.
    When you merge dimensions in a WebI report, it is exactly that, merging dimensions: you link Material from BEx and Material ID from SQL. However, on the BEx side Material displays as a long text field which will never join to the SQL data, which is an ID. The like-for-like objects are Material Key in BEx and Material ID in SQL; however, as Material Key from BEx is an attribute, it manifests itself in WebI via a universe as a detail object, which makes it unavailable for merging.
    I have tried setting the Material characteristic to display as KEY in the BEx query designer, but it still comes through as long text, so I am still unable to merge the data sets.
    Any workarounds ??
    Andrew

    Hi Andrew,
    In universe designer, edit the material key detail object. Copy the text that refers to the characteristic attribute in your BEx query. Create a new dimension (say, 'Material Key') and paste this text as the definition.
    Essentially you're turning a detail object into a dimension, which (in my limited experience) works just fine.
    Let me know how you go.
    DG

  • " (quotes) is column values when loading data from csv to oracle table

    I am loading data from a csv file into an oracle table.
    After the data is loaded, let's suppose the value in one column is X.
    When I write a query to fetch the data, like
    select * from table where col='X', it gives no output.
    On investigating, I found that the value is stored in the column as "X".
    This also happens when I copy-paste the value from the column into a text editor...
    I want to remove these double quotes, and I also want to know why they are appearing.
    Any suggestions, guys?
    Thanks
    Using Oracle 10g

    These quotes are part of your data file; most CSV parsers remove them.
    If you are using an external table, you can remove them with something like this in your access parameters:
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_LOADER
      ACCESS PARAMETERS
      (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        OPTIONALLY ENCLOSED BY '"'
      )
    )
    But if you want to remove them from your existing data, use this:
    update your_table
      set col1 = trim( '"' from col1 );
    Note that in some CSV files which use an enclosing character, the enclosing character itself doubles as an escape character, so that the enclosing character can appear inside a field.
    For example, a field might be "Some text with ""this"" enclosed".
    To correct these fields you might use another update:
    update your_table
      set col1 = replace( col1, '""', '"' );
