Data Export Question

Hi Experts,
Is there a way to use the data export function to create a data extract file that won't put double quotes around the members?
Thank You!

If extracting to a flat file, I don't think so. If you extract to a relational source, it does not include the quotes.
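If you do need the flat file, a common workaround is to strip the quotes in a post-processing step after the export completes. A minimal sketch, assuming a Unix-like shell and that the exported data values themselves never contain double quotes (the file name is a placeholder):
     # remove the double quotes DATAEXPORT wraps around member names
     sed 's/"//g' export.txt > export_noquotes.txt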

Similar Messages

  • Simple Data Export Question

    Hello,
I am testing the process feature of the LiveCycle Workbench.
However, I am facing some difficulties...
As a test, I wanted to create a simple process which would:
1. Fetch an email attachment (an interactive PDF form)
2. Export the XML data from the form
3. Send the XML data by email
I managed to do step 1, but as soon as I get to step 2, I have the following problem:
I have the PDF form stored in a variable of type document, but I cannot use this variable to assign it as input to the "export data" component...
The export data component seems to expect an asset...
If I run my process with the PDF as input (in URI format), I get an input exception saying that my PDF is not in a correct format...
I thought this test process would be a simple one.
    Could someone show me how to achieve this in a process? Thanks a lot!

Please have a look at the various service samples available here: http://help.adobe.com/en_US/livecycle/9.0/samples/lc_sample_service.html
    Here is the link to all the samples : http://www.adobe.com/devnet/livecycle/samples.html
    Hope this helps.
    Thanks,
    Wasil

  • Data export(ttbulkcp) Oracle TimesTen Question

I'm trying to export from Oracle TimesTen (TimesTen Release 11.2.1.5.0) with ttbulkcp (Data export), using SQL Developer Version 2.1.1.64.
All other functions operate normally, and ttbulkcp (Data export) on an Oracle table also works. But I get the following error from ttbulkcp (Data export) on a TimesTen table:
    java.lang.NullPointerException
         at oracle.dbtools.raptor.format.ResultsFormatterWrapper.getColumnCount(ResultsFormatterWrapper.java:67)
         at oracle.dbtools.raptor.format.ResultsFormatter.getColumnCount(ResultsFormatter.java:130)
         at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.getColumns(TimesTenLoaderFormatter.java:207)
         at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.printColumnData(TimesTenLoaderFormatter.java:183)
         at oracle.dbtools.raptor.timesten.format.TimesTenLoaderFormatter.start(TimesTenLoaderFormatter.java:73)
         at oracle.dbtools.raptor.format.ResultSetFormatterWrapper.print(ResultSetFormatterWrapper.java:150)
         at oracle.dbtools.raptor.format.ResultsFormatter.print(ResultsFormatter.java:200)
         at oracle.dbtools.raptor.format.ResultsFormatter.doPrint(ResultsFormatter.java:416)
         at oracle.dbtools.raptor.dialogs.actions.TableExportAction$5.doWork(TableExportAction.java:637)
         at oracle.dbtools.raptor.dialogs.actions.TableExportAction$5.doWork(TableExportAction.java:634)
         at oracle.dbtools.raptor.backgroundTask.RaptorTask.call(RaptorTask.java:193)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
         at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager$RaptorFutureTask.run(RaptorTaskManager.java:492)
         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:619)
    Driver not capable
    Thank you.
    GooGyum

If you have a DB support contract, I suggest you open an SR on Metalink/MOS to get an official response and follow-up; otherwise you can hope someone from development picks it up here.
    Regards,
    K.

  • Automatise data export from website

    Hello,
I am stuck on an issue regarding data export from scopus.com. I have 61 keywords for which I want to export (as CSV) the number of documents by year, country, document type, and subject area. Doing that by hand would be extremely tedious, hence my question whether this can be automatised. Unfortunately Scopus does not provide an API to access the data directly.
    So, the process to be automatised would be:
    1. Open http://www.scopus.com/home.url
    2. Enter the keyword in "search for"
    3. Click on "search"
    4. Click on "Analyze results"
    5. Click on tab "year" and "export". Save the file
    6. Click on tab "country" and "export". Save the file
    6. Click on tab "document type" and "export". Save the file
    6. Click on "subject area" and "export". Save the file
Do programs exist that would help me retrieve the data?
    Thanks!

You could achieve your goal with bash and wget, using some combination of --save-cookies and --post-data.
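For illustration, a minimal sketch of that approach (the form-field name and export URL below are assumptions; the real field names would need to be read from the search page's HTML):
     # submit the search form once, saving the session cookies
     wget --save-cookies cookies.txt --keep-session-cookies \
          --post-data 'searchterm=KEYWORD' \
          -O results.html 'http://www.scopus.com/home.url'
     # reuse the cookies for each follow-up export request
     wget --load-cookies cookies.txt -O year.csv 'http://www.scopus.com/EXPORT_URL_FOR_YEAR_TAB'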
    But be aware that scopus is run by Elsevier, whom many view as the most evil of the publishing companies. Think twice before doing something that may annoy them.

  • Data export BR execution Time

    Hi all,
the data export BR is taking more than 13 hours to execute, and after it finishes I get an exported file which has only 7000 rows.
We have data at only 4 members out of 783 members of a particular dimension, but the BR is running for all members. My question is: if the data size increases, i.e. if we get data at all 783 members, will the execution time of the same BR stay around 13 hours (since we are executing the BR for the same number of combinations), or will it take longer?
    Thanks.

  • XML Multiple Data Export/Import into New Form

    Is it possible to import an XML file with multiple form data (exported via "Merge data files into spreadsheet" option, then saved as XML) into a "template" form and create individual forms from the multiple data sheet? In other words, I've merged 65 forms' data into an XML file. Now I'd like to import it all back into an updated form.
    What I've been doing now is exporting the XML data individually for each form and importing each form individually into the new form.
One option is to extend rights to the users so they can import and export themselves, but I'm still looking into the Formrouter service, which, if implemented, won't be for a while.
    Any solutions to this painful process?
    Thanks - Derek
    I just realized this may be a question for a different forum...Acrobat... My apologies.

    Hi Derek,
Without the LC Enterprise server product(s) I don't think you will be able to achieve this. Acrobat.com gives a mechanism for distributing the form, and I am fairly sure it will allow you to view the responses in a new form.
Also, applying Reader extensions to the form with Acrobat will not help, as this removes the ability to import/export XML. See https://acrobat.com/#d=3lGJZAZuOmk8h86HCWyJKg. If you are extending rights with LC Reader Extensions ES then this restriction should not apply.
    If you have the 65 XML responses, I would be inclined to bite the bullet and manually import the XML into the new form.
    Good luck,
    Niall

  • EXP-00091: Exporting questionable statistics

    Dear Friend,
I am using databases on both Linux and Windows. The problem I am now facing is while exporting data from Linux that was imported from a Windows Oracle database dump file.
When I export, I get this error:
EXP-00091: Exporting questionable statistics
and it shows:
Export terminated successfully with warnings
Now when I import the exported file from Linux into Windows, I get this error:
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Import file: EXPDAT.DMP > c:/aa.dmp
    Enter insert buffer size (minimum is 8192) 30720>
    IMP-00010: not a valid export file, header failed verification
    IMP-00000: Import terminated unsuccessfully
    Regards,
    Oracle User

    Hi,
    >>because I exported dump from oracle10G
Then you used the 10g version of exp, right? To move data DOWN a version(s), you need to export using the lower version's EXP tool and import using the lower version's IMP tool.
Take a look at the table below:
    Exporting Data From Release 10.2 and Importing Into Earlier Releases
Export From     Import To        Use Export Utility For    Use Import Utility For
Release 10.2    Release 10.2     Release 10.2              Release 10.2
Release 10.2    Release 10.1     Release 10.1              Release 10.1
Release 10.2    Release 9.2      Release 9.2               Release 9.2
Release 10.2    Release 9.0.1    Release 9.0.1             Release 9.0.1
Release 10.2    Release 8.1.7    Release 8.1.7             Release 8.1.7
Release 10.2    Release 8.0.6    Release 8.0.6             Release 8.0.6
>> EXP-00091 still remains... you said I have to change the NLS but I am new to Linux
I think that there is no problem ... but, as I said before, you can also use the statistics=none clause; just remember that you need to use the 9i version of exp ...
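For reference, a minimal sketch of such an invocation (the credentials, connect string, owner, and file name below are placeholders, not values from this thread):
     exp system/password@orcl owner=scott file=exp_scott.dmp statistics=none
Run it with the lower-release exp binary (9i here), per the table above.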
    Cheers

  • Endeca data export - Single catalog only

    With ATG 10.1.2, is there a way to configure the Endeca data export / Baseline Index process (as run from the ProductCatalogSimpleIndexingAdmin component in dyn admin) to generate data for products and categories in a particular catalog only, instead of for all catalogs?
    Thanks!
    -Jon
    Edited by: Jon Schneider on Apr 9, 2013 5:43 AM

    cmore, thank you, that's very helpful!
I was looking into this question as a possible solution to an issue where, when running the baseline import process as part of the ATG-Endeca integration to upload data from ATG to Endeca, catalog data was not being generated properly; I suspected that part of the problem might be that my application has multiple catalogs.
The problem turned out to be due to a missing default catalog configuration for my application's site, as described in my posts in this thread: Re: Baseline Index error (invoking from ProductCatalogSimpleIndexingAdmin)
Since CategoryToDimensionOutputConfig doesn't have a value for the repositoryItemGroup property by default, and it looks like it would be a bit of work to get that set up properly, and I no longer have a pressing need to do it, I'm setting this task aside for the time being. cmore, I appreciate your help regardless, since this information may be useful to me or to others in the future. (I'll go ahead and mark your post as the solution.)
    Thanks,
    -Jon

  • Master Data Export - PC00_M10_OTEM - Information

Does anyone know where or if there is any detailed documentation on using the standard delivered Master Data Export program? We are currently using it in production but have questions as to what/when/why certain information is picked up and passed to ADP.

Thanks for the information! But I'm looking for more details, such as:
What triggers an IDoc to be created for an individual?
How does the program pick which items get included in the IDoc?
How do I troubleshoot why information came over, why some information keeps coming over, or why some information isn't coming over?
We keep seeing unexpected things coming over in our IDocs even though nothing was changed.

  • InfoUser Master Data Export using scc8

    Hi Team,
I am doing a user master data export using SCC8. I have one question regarding this:
1) When I took the export using SCC8, only one transport request (TR) was created in the cofiles folder, and I can't see that TR in the data files folder.
Can anybody help me with this?
    More info:
    We are going to rebuild the system so we need to preserve the user master record data.
    Regards,
    Abhilash
    Edited by: gundala$ on Feb 29, 2012 8:10 AM

    Hi,
    Kindly go through the following link.
    http://forums.sdn.sap.com/thread.jspa?threadID=1310350
    Anil

  • Bulk data export

    Hi,
Our users need to export large data sets based on ad-hoc attribute queries, for example exporting the email addresses of customers in order to send promotions. This kind of usage cannot be built with a dimensional model. I find Endeca to be a good tool for our scenario. The only question is whether Endeca supports large data exports.
    Thanks

    Cheney,
EID Studio has components, like the results table or results list, that support record export. You can read more about this functionality in the Studio Users Guide found here: http://docs.oracle.com/cd/E29805_01/StudioUsersGuide.pdf
It is important to understand what specific volumes are expected for your "large data exports". Studio has safeguards which control maximum allowable export sizes, both in terms of Endeca records (for results tables showing raw Endeca records) and analytic records (for results tables employing EQL to aggregate the Endeca records to the grain of some other entity). These safeguard limits can be found in the "Framework Settings" section of the Studio control panel. Relaxing these defaults to allow for larger export sizes can put extreme memory and CPU load on both Studio and the Endeca Server, so it is important to understand the performance implications of this approach.
    Thanks,
    Dan

  • Essbase Data Export not Overwriting existing data file

We have an ODI interface in our environment which is used to export data from Essbase apps to text files using data export calc scripts; we then load those text files into a relational database. Lately we have been seeing an issue where the data export calc script is not overwriting the file, and is instead appending the new data to the existing file.
    The OverWriteFile option is set to ON.
SET DATAEXPORTOPTIONS
{
     DataExportLevel "Level0";
     DataExportOverWriteFile ON;
     DataExportDimHeader ON;
     DataExportColHeader "Period";
     DataExportDynamicCalc ON;
};
    The "Scenario" variable is a substitution variable which is set during the runtime. We are trying to extract "Budget" but the calc script is not clearing the "Actual" scenario from the text file which was the scenario that was extracted earlier. Its like after the execution of the calc script, the file contains both "Actual" and "Budget" data. We are not able to find the root cause as in why this might be happening and why OVERWRITEFILE command is not being taken into account by the data export calc script.
We have also deleted the text data file to make sure there were no temporary files or anything else left on the server. But when we ran the data export directly from Essbase again, the file once again contained both "Actual" and "Budget" data, which is really strange. We have never encountered an issue like this before.
    Any suggestions regarding this issue?

Did some more testing and have pretty much zeroed in on the issue. Our Scenario members are actually named something like "Q1FCST-Budget", "Q2FCST-Budget", etc.
This is the reason we need to use a member function: the calc script reads "&ODI_SCENARIO" (which is set to Q2FCST-Budget) as a number and gives an error. To convert this value to a string we are using the @member function. And this seems to be the root cause of the issue. The ODI_SCENARIO variable is set to "Q2FCST-Budget", but when we run the script with the calculation function @member("&ODI_SCENARIO"), the data file brings back the values for "Q1FCST-Budget" out of nowhere, in addition to the "Q2FCST-Budget" data we are trying to extract.
Successful test case 1:
1) Put the scenario "Q2FCST-Budget" hard-coded in the script
e.g. "Q2FCST-Phased"
2) Ran the script
3) Result OK. The script overwrote the file with the Q2FCST-Budget data
Successful test case 2:
1) Put the scenario in the @member function
e.g. @member("Q2FCST-Budget")
2) Results again OK
Failed case:
1) Deleted the file
2) Put the scenario in a substitution variable, used the member function @member("&ODI_SCENARIO"), and ran the script. ODI_SCENARIO is set to Q2FCST-Budget in the Essbase variables.
e.g. @member("&ODI_SCENARIO")
3) Result: the text file contained both "Q1FCST-Budget" and "Q2FCST-Budget" data values.
We are still not close to the root cause of why this issue is happening. Putting the sub var in the @member function changes the complete picture and gives us inaccurate results.
    Any clues anyone?
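For context, here is a minimal sketch of the kind of FIX / DATAEXPORT section being discussed (the dimension names, member list, and output path are placeholders, not the actual script from this thread):
     FIX (@MEMBER("&ODI_SCENARIO"), "Working", @LEVMBRS("Entity", 0))
          DATAEXPORT "File" "," "/tmp/export.txt" "#Mi";
     ENDFIX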

  • Unable to stop incorrect date exports

    How do we set up a form in Adobe Acrobat XI that allows dates to be formatted a certain way (mmm/dd/yyyy) and exported in the same way to Excel and always be recognized as a "proper" date in Excel?
    Currently the following does not work (Attempt #1):
    Set up a field; Set the format category as date; Set up a Custom format of "mmm/dd/yyyy"
    Create a distribution file
    When users fill out the form if they type in an incorrect date, eg., "August 27 2013", the form automagically shows the date on the PDF as "Aug/27/13" - Great!
    When the users submit the form and it's brought into the response file the dates are shown in a default date format of mm/dd/yyyy - Fine, once the form owners understand this
When the form owners export the information, the exported data is what the users originally entered, not what it was automagically formatted to. For instance, if submitters originally entered "August 27, 2013" then that's what goes across to Excel. And some of these formats Excel doesn't know how to convert. - Understandably frustrating for form owners
Attempt #2: As a workaround we set up special formatting that has a mask of "AAA/99/9999". This at least forces the users to use the same formatting, but it confuses submitters when they need to enter dates from 1-9, and we've also found that the conversion of this format to a date in Excel doesn't work; but at least it's consistent! JavaScript was also added to force users to use specific month abbreviations.
var d = new Date(event.value.replace(/-/g, " ")); // re-parse the entry, treating hyphens as spaces
// an unparseable entry yields an invalid Date, whose time value is NaN
if (isNaN(d.getTime())) { app.alert("Please enter in a proper date.\n\nThe month abbreviations are: Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec"); }
Attempt #3: The last attempt was to continue with the set-up from Attempt #1, but to also use the JavaScript from Attempt #2. The theory being that if a user entered "August 27 2013" the JavaScript would complain. Alas, the JavaScript appears to run after Adobe automagically does its date format conversion.
    Does anyone know how to get around this or have any other ideas to either enforce a usable date format or have Adobe export the dates as they've been automatically formatted to? We've tried to find a way to turn off the automatic date conversion that Adobe's running, but haven't found a way yet. Another option seemed to be to allow a masking that allowed for optional characters (so that the "0" wouldn't be needed for the dates 1-9) but there doesn't seem to be one.
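We also considered a validation script that re-parses the entry against the exact display format. A sketch, assuming Acrobat's util.scand parser (which returns null when the string doesn't match the given format); the format string and alert text are just illustrative:
     // reject anything that doesn't match the mmm/dd/yyyy mask
     if (event.value) {
         var d = util.scand("mmm/dd/yyyy", event.value);
         if (d === null) {
             app.alert("Please enter the date as mmm/dd/yyyy, e.g. Aug/27/2013.");
             event.rc = false; // discard the value
         }
     }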
    Thanks in advance!

Since there was no clear way to ensure that the date formatting was correct prior to exporting, we're going to have the respondents use drop-downs to ensure the formatting is correct. This is not the most convenient option for the users, though: they're accustomed to being able to type values to select them (e.g., for the date of 23 they would expect to enter 2 then 3 to get 23) based on other applications, but the Adobe pull-downs don't "group" what's been entered (e.g., 2 then 3 will select 30, not 23), so it will take them a bit to get used to it. I still can't believe that Adobe wouldn't simply export what the value has been formatted to, though... after all, that's what we set the form up for.

  • Data Export error when migrating cucm cluster on version 7.1.5 to cucm 10.0

    Hi
    Has anyone come across below? If so any suggestions for workaround?
Oct 01, 2014 11:54 PDT  STATUS  The task has been scheduled.
Oct 01, 2014 11:54 PDT  INFO    Export task action ID #154 with 1 node(s) scheduled.
Oct 01, 2014 11:54 PDT  STATUS  The task has started.
Oct 01, 2014 11:54 PDT  INFO    Export task action ID #154 with 1 node(s) started.
Oct 01, 2014 11:54 PDT  INFO    Export job for node xx.xx.xx started.
Oct 01, 2014 12:09 PDT  ERROR   Data export failed for node xx.xx.xx.
Oct 01, 2014 12:09 PDT  ERROR   Export job for node xx.xx.xx failed.
Oct 01, 2014 12:09 PDT  ERROR   1 node(s) in Export task action ID #154 failed: xx.xx.xx
Oct 01, 2014 12:09 PDT  ERROR   Task paused due to task action failures.

    Hi,
You can log in to PCD through PuTTY to see the logs:
file list activelog tomcat/logs/ucmap/log4j/ detail date
Further, run:
file tail activelog tomcat/logs/ucmap/log4j/ucmap00001.log (for example)
    regds,
    aman

  • What rights are needed to do a Data Export?

    I would like someone from our help desk to be able to do Data-->Export
    in Console1 to export inventory data regularly. When they try this,
    they immediately get the following error:
    Data Export will not proceed. Unable to identify the type of
    installation.
I saw TID 10088974 but I don't think it applies, because if I log in to the same PC it works (running Console1 locally).
    I also saw another TID indicating user needs Browse rights to the
    ZEN_invDatabase object. [Public] does have rights to this.
    What rights do I need to grant?
    -Marc Johnson

    On Wed, 15 Jun 2005 22:22:40 GMT, Marc Johnson wrote:
    > What rights do I need to grant?
    they will need the read right to the properties of the database object...
    Marcus Breiden
    Please change -- to - to mail me.
    The content of this mail is my private and personal opinion.
    http://www.edu-magic.net
