Data Loader On Demand Inserting Causes Duplicates on Custom Objects

Hi all,
I have a problem: I need to import around 250,000 records on a regular basis, so I have built a solution using Data Loader with two processes, one to insert and one to update. I was expecting that inserts of records with an existing External Unique Id (EUI) would fail, so that only new records would get inserted (as it says in the PDF), but it keeps creating duplicates even when all the data is exactly the same.
Does anyone have any ideas?
Cheers
Mark

Yes, you have encountered an interesting problem. Every object has a field labelled "External Unique Id", but it is inconsistent whether a unique index actually backs it: some objects have keys that are unique and some seemingly have none. The best way to test this is with the command-line bulk loader; because the GUI import wizard can do both INSERT and UPDATE in one execution, you don't always see the problem there.
I can run the same data over and over through the command-line loader with the INSERT option and never hit a unique-key constraint, for example on ASSET, CONTACT, and the CUSTOM OBJECTS. Once you have verified whether the bulk loader is creating duplicates or not, that might drive you to the decision of using a web service instead.
I believe the FINANCIAL TRANSACTION object has a unique index on the "External Unique Id" field, and the FINANCIAL ACCOUNT object has a unique key on the "Name" field.
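For reference, a minimal properties file for the insert pass might look like the sketch below. The option names are the ones the Data Loader client echoes into its log; the record type, file paths, and sign-in format are placeholders to adapt:

    operation=insert
    duplicatecheckoption=externalid
    recordtype=account
    datafilepath=sample/account-insert.csv
    mapfilepath=sample/account.map
    csvdelimiter=,
    datetimeformat=usa
    username=<Company Sign In ID>/<User ID>

The update pass would be the same file with operation=update. As discussed further down this page, the external-ID duplicate check only takes effect on updates, which is why the insert pass alone can create duplicates.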
Hope this helps a bit.
Mychal Manie ([email protected])
Hitachi Consulting

Similar Messages

  • Oracle Data Loader On Demand on EHA Pod

    Oracle Data Loader doesn't work correctly.
    I downloaded it from Staging (EHA Pod) and did the following:
    1. Move to the "config" folder and update "OracleDataLoaderOnDemand.config":
    hosturl=https://secure-ausomxeha.crmondemand.com
    2. Move to the "sample" folder and change Owner_Full_Name in "account-insert.csv".
    Then, at the command prompt, run the batch file.
    It runs successfully, but records aren't inserted on the EHA Pod; the records show up on the EGA Pod instead.
    This is the log.
    Is Data Loader only for the EGA Pod? Would you please give me some advice?
    [2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): Execution begin.
    [2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): List of all configurations loaded: {sessionkeepchkinterval=300, maxthreadfailure=1, testmode=production, logintimeoutms=180000, csvblocksize=1000, maxsoapsize=10240, impstatchkinterval=30, numofthreads=1, hosturl=https://secure-ausomxeha.crmondemand.com, maxloginattempts=1, routingurl=https://sso.crmondemand.com, manifestfiledir=.\Manifest\}
    [2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): List of all options loaded: {datafilepath=sample/account-insert.csv, waitforcompletion=False, clientlogfiledir=., datetimeformat=usa, operation=insert, username=XXXX/XXXX, help=False, disableimportaudit=False, clientloglevel=detailed, mapfilepath=sample/account.map, duplicatecheckoption=externalid, csvdelimiter=,, importloglevel=errors, recordtype=account}
    [2012-09-19 14:49:55,296] DEBUG - [main] BulkOpsClientUtil.getPassword(): Entering.
    [2012-09-19 14:49:59,828] DEBUG - [main] BulkOpsClientUtil.getPassword(): Exiting.
    [2012-09-19 14:49:59,828] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Entering.
    [2012-09-19 14:49:59,937] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Sending Host lookup request to: https://sso.crmondemand.com/router/GetTarget
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Host lookup returned: <?xml version="1.0" encoding="UTF-8"?>
    <HostUrl>https://secure-ausomxega.crmondemand.com</HostUrl>
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Successfully extracted Host URL: https://secure-ausomxega.crmondemand.com
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Exiting.
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Entering.
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL from the Routing app=https://secure-ausomxega.crmondemand.com
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL from config file=https://secure-ausomxeha.crmondemand.com
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Successfully updated the config file: .\config\OracleDataLoaderOnDemand.config
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL set to https://secure-ausomxega.crmondemand.com
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Exiting.
    [2012-09-19 14:50:03,953] INFO - [main] Attempting to log in...
    [2012-09-19 14:50:10,171] INFO - [main] Successfully logged in as: XXXX/XXXX
    [2012-09-19 14:50:10,171] DEBUG - [main] BulkOpsClient.doImport(): Execution begin.
    [2012-09-19 14:50:10,171] INFO - [main] Validating Oracle Data Loader On Demand Import request...
    [2012-09-19 14:50:10,171] DEBUG - [main] FieldMappingManager.parseMappings(): Execution begin.
    [2012-09-19 14:50:10,171] DEBUG - [main] FieldMappingManager.parseMappings(): Execution complete.
    [2012-09-19 14:50:11,328] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Submitting BulkOpImportGetRequestDetail WS call
    [2012-09-19 14:50:11,328] INFO - [main] A SOAP request was sent to the server to create the import request.
    [2012-09-19 14:50:13,640] DEBUG - [Thread-3] SOAPImpRequestManager.sendImportGetRequestDetail(): SOAP request sent successfully and a response was received
    [2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): BulkOpImportGetRequestDetail WS call finished
    [2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): SOAP response status code=OK
    [2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Going to sleep for 300 seconds.
    [2012-09-19 14:50:20,328] INFO - [main] A response to the SOAP request sent to create the import request on the server has been received.
    [2012-09-19 14:50:20,328] DEBUG - [main] SOAPImpRequestManager.sendImportCreateRequest(): SOAP request sent successfully and a response was received
    [2012-09-19 14:50:20,328] INFO - [main] Oracle Data Loader On Demand Import validation PASSED.
    [2012-09-19 14:50:20,328] DEBUG - [main] BulkOpsClient.sendValidationRequest(): Execution complete.
    [2012-09-19 14:50:20,343] DEBUG - [main] ManifestManager.initManifest(): Creating manifest directory: .\\Manifest\\
    [2012-09-19 14:50:20,343] DEBUG - [main] BulkOpsClient.submitImportRequest(): Execution begin.
    [2012-09-19 14:50:20,390] DEBUG - [main] BulkOpsClient.submitImportRequest(): Sending CSV Data Segments.
    [2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.CSVDataSender(): CSVDataSender will use 1 threads.
    [2012-09-19 14:50:20,390] INFO - [main] Submitting Oracle Data Loader On Demand Import request with the following Request Id: AEGA-FX28VK...
    [2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): Creating thread 0
    [2012-09-19 14:50:20,390] INFO - [main] Import Request Submission Status: Started
    [2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): Starting thread 0
    [2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): There are pending requests. Going to sleep.
    [2012-09-19 14:50:20,406] DEBUG - [Thread-5] CSVDataSenderThread.run(): Thread 0 submitting CSV Data Segment: 1 of 1
    [2012-09-19 14:50:24,328] INFO - [Thread-5] A response to the import data SOAP request sent to the server has been received.
    [2012-09-19 14:50:24,328] DEBUG - [Thread-5] SOAPImpRequestManager.sendImportDataRequest(): SOAP request sent successfully and a response was received
    [2012-09-19 14:50:24,328] INFO - [Thread-5] A SOAP request containing import data was sent to the server: 1 of 1
    [2012-09-19 14:50:24,328] DEBUG - [Thread-5] CSVDataSenderThread.run(): There is no more pending request to be picked up by Thread 0.
    [2012-09-19 14:50:24,328] DEBUG - [Thread-5] CSVDataSenderThread.run(): Thread 0 terminating now.
    [2012-09-19 14:50:25,546] INFO - [main] Import Request Submission Status: 100.00%
    [2012-09-19 14:50:26,546] INFO - [main] Oracle Data Loader On Demand Import submission completed succesfully.
    [2012-09-19 14:50:26,546] DEBUG - [main] BulkOpsClient.submitImportRequest(): Execution complete.
    [2012-09-19 14:50:26,546] DEBUG - [main] BulkOpsClient.doImport(): Execution complete.
    [2012-09-19 14:50:26,546] INFO - [main] Attempting to log out...
    [2012-09-19 14:50:31,390] INFO - [main] XXXX/XXXX is now logged out.
    [2012-09-19 14:50:31,390] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Interrupted.
    [2012-09-19 14:50:31,390] DEBUG - [main] BulkOpsClient.main(): Execution complete.

    Hi,
    the Data Loader points to the production environment by default, regardless of whether you download it from staging or production. You can see this in your log: the routing app looked up the host and overwrote the EHA hosturl in your config file with the EGA address. To pin the client to the staging pod, edit the config file and set the following:
    hosturl=https://secure-ausomxeha.crmondemand.com
    routingurl=https://secure-ausomxeha.crmondemand.com
    testmode=debug

  • Insert/Query records of custom objects

    We are looking for a generic programmatic solution to query, insert, and delete records of custom objects. Using different WSDLs for custom objects and making calls to the APIs of multiple stubs is NOT a generic solution. The main purpose of a generic solution is to be independent of the number of custom objects present in any OCOD instance and of their WSDLs. We are prepared to take a minor hit on performance and other aspects for this.
    Any suggestions? Ideas?

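    One technique that avoids per-object stubs is to skip the generated proxies entirely and template the record type name into hand-built SOAP envelopes. Below is a rough Python sketch of the idea only: the element names and omitted namespaces are placeholders rather than the real OCOD schema, and the /Services/Integration endpoint is the one visible in the logs elsewhere on this page:

        import urllib.request

        # Placeholder envelope: a real OCOD request needs the proper namespaces and
        # session handling, but the point is that the object name is plain data.
        ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <{object}QueryPage>
              <ListOf{object}><{object}><Id>{id}</Id></{object}></ListOf{object}>
            </{object}QueryPage>
          </soap:Body>
        </soap:Envelope>"""

        def query(host, session_id, object_name, row_id):
            # The same function serves Account, CustomObject1, ... with no new stubs.
            request = urllib.request.Request(
                host + "/Services/Integration;jsessionid=" + session_id,
                data=ENVELOPE.format(object=object_name, id=row_id).encode("utf-8"),
                headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": ""},
            )
            with urllib.request.urlopen(request) as response:
                return response.read()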

  • How to best reduce data load on MAC due to duplicate Adobe files?

    I just got hired at a small business. I don't have a lot of experience with Macs, so I need to know some best practices here.
    I am working with CS3: Ai, Ps, Id, and later, Dw.
    It's a magazine publishing company. I have it organized so each magazine has its own folder, and I want to have an "old editions" folder and a "working edition" folder. Within each, I want to break it down into "Ads this issue", "Links", and "Stories".
    The Ads and Links are where I'm concerned. I want to have a copy of each ad's file within that folder, and a copy of all the other files it's linked to, so that if the original ads/images get moved, the links won't be disturbed.
    I'm wondering if there is a way to do this without bogging down the machine's HD with duplicates of really large files. The machine moves slowly enough as it is.
    I've theorized that I could:
    A) Keep the main "Ads" folder (along with its subfolders) compressed, and the "old editions" compressed, and have a regular copy in the working folder only. This also works because the ads get edited for different editions sometimes.
    or
    B) Is there a way to do this with aliases? Being unfamiliar with aliases, or even shortcuts, because I haven't worked in an actual production environment yet, I don't know the functionality of linking an alias into an ID file. I read a couple of previous posts and the outlook isn't very good for it.
    or
    C) Just place a PDF (or whatever you guys think is the best quality-preserving filetype) in with the magazine itself? Then each company could have its own ad folder with all the rest of the files...
    What do you all think? If you can even link me to a post that goes into further detail on which option you think is best, or if you have a different solution, that would be wonderful. I am open to answers.
    I want to be sure to leave a cleaner computer/work environment than the last few punks who were here... That's my "best practice". Documentation and file organization got drilled into me at Uni.

    Sorry, I am overcaffeinated today, so this response is kind of long.
    "Data load?" Do you mean that:
    a) handling lots of large files is too much for your computer to handle, or
    b) simply having lots of large files on your hard drive (even if they are not currently in use) slows your computer down?
    Because b) is pretty much impossible, unless you are almost out of space on your system drive. Which can be ameliorated by... buying another drive.
    I once set up an install of InDesign on a Mac for a friend of mine who is chipping away at a big-data math PhD and who is sick to death of LaTeX. (Can't blame her, really.) Because we are both BSD nerds from way back, she wanted to do what you are suggesting - but instead of thinking about aliases, which you are correct to regard with dubiousness, she wanted to do it with hardlinks. Which worked, more or less. She liked it. Seemed like overkill to me.
    I suspect that this is because she is a highfalutin' academic whereas I am a production wonk in a business. I have to compare the cost of my time resolving a broken-link issue due to a complicated archiving scheme versus Just Buying Another Drive. Having clocked myself on solving problems induced by complicated archival schemes (or failure of overworked project managers to correctly follow the rules for same) I know that it doesn't take many hours of my work invested in combing through archives or rebuilding lost image files to equal Another Drive that I can go out and Just Buy.
    If you set up a reasonable method of file organization, and document it clearly, then you have already saved your organization (and your successors!) significant amounts of time and cash. Hard drive space is cheap. Don't spend your time on figuring out a way to save a few terabytes here and there. In fact, what I'd suggest is that you figure out how many terabytes you've already spent on this question: take today's ratio of easily purchasable, reliable external hard drives to your unit of preferred currency, then figure out how many hours you've already spent on the question.
    The only reason I can make this argument is that the price per unit of magnetic data storage has, with remarkably few exceptions, been constantly plummeting for decades, while the space requirements for documentation have been going up comparatively slowly. If you need a faster computer to do your job more efficiently, then price out an SSD for your OS, applications, and jobs-on-deck, and then show your higher-ups the math that proves the SSD pays for itself in your saved time within n weeks. My gut feeling these days is that, unless you are seriously underpaid, n is between two and six.
    Finally: I didn't really address your suggested possibilities. Procedure C (placing PDFs) usually works, but you do need to figure out how to make PDFs in such a way as to ensure they play nicely with your print method. Procedure A (compress stuff you don't need anymore) probably works okay, but I hope that you have some sort of command-line scripting ability to be able to quickly route stuff into and out of archives.

  • Data Loader On Demand Proxy Usage for Resume operation

    Hi,
    My project requires me to use the proxy feature available in the Data Loader R19 release.
    I can use the proxy at the command line for insert/update operations.
    However, the same doesn't work for the RESUME operation in Data Loader.
    I tried the proxy settings from the command line as well as from the property file, but to no avail.
    Any suggestions?
    Regards,
    Sumeet

    It's a Java application, so it may run on your Linux/Unix system; you would have to test to see if it works. Last time I checked, Oracle only supports running the application on Windows.

  • Data Load Wizard not Inserting/Updating all rows

    Hello,
    I am able to run through the whole Data Load Wizard without any problems. It reports that it successfully inserted/updated all the rows, but when I look in the table, I find a few rows that were not updated correctly. Of the entries I've identified that don't get inserted/updated properly, I've noticed they are the same rows that I was having issues with earlier. The issue was a number format error, which I solved by providing an explicit number format.
    Is it possible that the false inserts/updates might still be tied to the number format, or are there other reasons why the data load could be failing on only some rows?
    Thanks,
    Brian
    Edited by: 881159 on Mar 14, 2012 5:05 PM

    Hi Brian,
    I am not aware of a situation where you would get false results. However, there were some issues with number/date formats that sometimes were not properly parsed, and this has been fixed in the 4.1.1 patch. If your case is different from the one described in bug 13656397, I will be happy to get more details so that I can take a look at what is going on.
    Regards,
    Patrick

  • Oracle Data Loader On Demand : Account Owner Field mapping

    I was trying to import account records using Data Loader. When the data file contains the bare 'User ID' for the 'Account Owner' field, the import fails. When I use the "Company Sign In ID/User ID" value in the data file, it is successful.
    Is there any way to use the 'User ID' value in the data file for the 'Account Owner' field and run Data Loader successfully?

    The answer is no. It is my understanding that you need to map Account Owner to the User Sign In ID, which has the format of:
    <Company Sign In ID>/<User ID>
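    If the file you receive only carries the bare User ID (for example JSMITH instead of MYCOMPANY/JSMITH), a small preprocessing step can prepend the company portion before the load. Here is a minimal sketch in Python; the "Account Owner" column name, the file names, and the MYCOMPANY value are assumptions to adapt:

        import csv

        COMPANY = "MYCOMPANY"  # your Company Sign In ID; this value is an assumption

        with open("account.csv", newline="") as src, \
             open("account-fixed.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                # Rewrite the bare User ID as <Company Sign In ID>/<User ID>.
                row["Account Owner"] = COMPANY + "/" + row["Account Owner"]
                writer.writerow(row)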

  • How to detect duplicate for custom object 1

    Hi expert,
    are there any fields on a custom object that can detect duplicates ("If These Fields Match")?
    We thought it would be "Name", but it's not.
    Thanks, sab.

    Sab, field validation is not going to check for uniqueness of the custom object name. However, you could use field validation to add a value to the custom object name which could make it unique (such as name + rowID or name + created timestamp).

  • Data Loader inserting duplicate records

    Hi,
    There is an import that we need to run every day in order to load data from another system into CRM On Demand. I have set up a Data Loader script which is scheduled to run every morning. The script performs the insert operation.
    Every morning a file with new insert data is available in the same location and with the same name (generated by someone else), and the Data Loader script must insert all the records in it.
    One morning, there was a problem in the other job and a new file was not produced. When the Data Loader script ran, it found the old file and re-inserted the records (there were 3 in the file). I had specified the -duplicatecheckoption parameter as the external ID, since the records come from another system, but I came to know that the option works for update operations only.
    How can a situation like this be handled in future? The external ID should be checked for duplicates before the insert operation is performed. If we can't check on the Data Loader side, is it possible to somehow mark the field as 'unique' in the UI so that there is an error if a duplicate record is inserted? Please suggest.
    Regards,

    Hi
    You can use something like this:
    CURSOR crs IS SELECT DISTINCT deptno, dname, loc FROM dept;
    Now you can insert all the records present in this cursor.
    Assumption: you do not have duplicate entries in the dept table initially.
    Cheers
    Sudhir
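    The cursor above is really a database-side answer. On the Data Loader side, a more direct safeguard against re-loading a stale file is a wrapper that refuses to submit a file it has already processed. A minimal sketch in Python; the file names, paths, and loader invocation are assumptions:

        import hashlib
        import pathlib
        import subprocess
        import sys

        DATA_FILE = pathlib.Path("inbound/account-insert.csv")    # produced by the upstream job
        STATE_FILE = pathlib.Path("inbound/.last-loaded.sha256")  # fingerprint of the last load

        digest = hashlib.sha256(DATA_FILE.read_bytes()).hexdigest()
        if STATE_FILE.exists() and STATE_FILE.read_text().strip() == digest:
            sys.exit("Data file unchanged since the last load; skipping to avoid duplicates.")

        # Invoke the Data Loader client; the batch file and flag names are assumptions.
        subprocess.run(["oracledataloaderondemand.bat", "-propertyfilepath",
                        "insert.properties"], check=True)
        STATE_FILE.write_text(digest)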

  • Data loader : Import -- creating duplicate records ?

    Hi all,
    has anyone else encountered the behaviour where Oracle Data Loader creates duplicate records (even if I set the option duplicatecheckoption=externalid)? When I check the import request queue view, the request parameters of the job look fine:
    Duplicate Checking Method == External Unique ID
    Action Taken if Duplicate Found == Overwrite Existing Records
    but Data Loader has created new records whose "External Unique ID" already exists.
    Very strangely, when I create exactly the same import manually (using the Import Wizard), it works correctly! There the duplicate checking method works and the record is updated.
    I know the Data Loader has two methods, one for update and the other for insert; however, I did not expect the insert to create duplicates if the record already exists, rather than doing nothing!
    Is anyone else experiencing the same?? I hope that this is not expected behaviour!! By the way, the "Update" method works fine.
    thanks in advance, Juergen
    Edited by: 791265 on 27.08.2010 07:25
    Edited by: 791265 on 27.08.2010 07:26

    Sorry to hear about your duplicate records, Juergen. Hopefully you performed a small test load first, before a full load, which is a best practice for data import that we recommend in our documentation and courses.
    Sorry also to inform you that this is expected behavior --- Data Loader does not check for duplicates when inserting (aka importing). It only checks for duplicates when updating (aka overwriting). This is extensively documented in the Data Loader User Guide, the Data Loader FAQ, and in the Data Import Options Overview document.
    You should review all documentation on Oracle Data Loader On Demand before using it.
    These resources (and a recommended learning path for Data Loader) can all be found on the Data Import Resources page of the Training and Support Center. At the top right of the CRM On Demand application, click Training and Support, and search for "*data import resources*". This should bring you to the page.
    Pete

  • Issues with ondemand Data loader

    Hello,
    We are facing 2 issues with the On Demand Data Loader.
    Issue 1
    While inserting 'Contacts' and 'Assets', if the 'Account' information is wrong, the records are created without accounts, even though "Account" is a required field.
    Issue 2
    While inserting records, Data Loader is not checking for duplicates, so duplicate records are getting created.
    Kindly advise if anyone has come across similar issues. Thanks
    Dipu
    Edited by: user11097775 on Jun 20, 2011 11:46 PM

  • Announcing 3 new Data Loader resources

    There are three new Data Loader resources available to customers and partners.
    •     Command Line Basics for Oracle Data Loader On Demand (for Windows) - This two-page guide (PDF) shows command line functions specific to Data Loader.
    •     Writing a Properties File to Import Accounts - This 6-minute Webinar shows you how to write a properties file to import accounts using the Data Loader client. You'll also learn how to use the properties file to store parameters, and to use the command line to reference the properties file, thereby creating a reusable library of files to import or overwrite numerous record types.
    •     Writing a Batch File to Schedule a Contact Import - This 7-minute Webinar shows you how to write a batch file to schedule a contact import using the Data Loader client. You'll also learn how to reference the properties file.
    You can find these on the Data Import Resources page, on the Training and Support Center.
    •     Click the Learn More tab> Popular Resources> What's New> Data Import Resources
    or
    •     Simply search for "data import resources".
    You can also find the Data Import Resources page on My Oracle Support (ID 1085694.1).

    Unfortunately, I don't believe that approach will work.
    We use a similar mechanism for some loads (using the bulk loader instead of web services) for the objects that have a large quantity of daily records.
    There is a technique (though messy) that works fine. Since Oracle does not allow the "queueing up" of objects of the same type (you have to wait for "account" to finish before you load the next "account" file), you can monitor the .LOG file for the SBL 0363 error, which means you can't submit another file yet (typically because one already exists).
    By monitoring for this error code in the log, you can sleep your process, then try again after a preset amount of time.
    We use this to allow for an UPDATE, followed by an INSERT, on the account... and then a similar technique so "dependent" objects have to wait for the prime object to finish processing. A sketch of such a retry loop follows below.
    PS... Normal Windows .BAT scripts aren't sophisticated enough to handle this. I would recommend either Windows POWERSHELL or C/Korn/Bourne shell scripts on Unix.
    I hope that helps some.
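    For illustration, the retry loop described above might look like this in Python (the log file name, the loader invocation, the property file names, and the exact error text are assumptions to adapt):

        import pathlib
        import subprocess
        import time

        LOG = pathlib.Path("OracleDataLoaderOnDemand.log")  # client log to monitor

        def run_once(props):
            """Submit one load; return False if this run logged SBL 0363 (request pending)."""
            offset = LOG.stat().st_size if LOG.exists() else 0
            subprocess.run(["oracledataloaderondemand.bat", "-propertyfilepath", props])
            if not LOG.exists():
                return True                  # no log written; assume the submission went in
            with LOG.open("rb") as fh:
                fh.seek(offset)              # inspect only what this run appended
                return b"SBL 0363" not in fh.read()

        # UPDATE existing accounts first, then INSERT the new ones, sleeping
        # while a request for the same object type is still being processed.
        for props in ("account-update.properties", "account-insert.properties"):
            while not run_once(props):
                time.sleep(300)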

  • Data Loader errors

    I have been using data loader for large imports and have received two errors that are causing me issues:
    The first error is from the log file and is: [main] Oracle Data Loader On Demand Import validation FAILED: String index out of range: -1
    The second error is from cmd window when import is in progress: WARNING: Unable to connect to URL: https://secure-ausomxxxx.crmondemand.com/Services/Integration;jsessionid=38dd67911f7fxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx due to java.security.PrivilegedActionExcep
    tion: javax.xml.soap.SOAPException: Error parsing envelope: (2, 7110257) Invalid char in text.
    What characters are considered invalid in a CSV file?
    What does String index out of range: -1 error message mean?
    Thanks in advance for your help or suggestions.
    Edited by: user572322 on Dec 6, 2010 12:41 PM

    Hi
    Have you checked the length of the data with regards to the field type?
    That might cause that error.
    Thanks,
    Mayank
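    If you need to find the offending character itself, a quick scan for control characters can help: XML 1.0 forbids control characters other than tab, CR, and LF in text, which is what the "Invalid char in text" SOAP error usually points to. A sketch in Python; the file name is an assumption:

        # Flag bytes that break XML parsing ("Invalid char in text").
        with open("import.csv", "rb") as fh:
            for lineno, line in enumerate(fh, start=1):
                for col, byte in enumerate(line, start=1):
                    if byte < 0x20 and byte not in (0x09, 0x0A, 0x0D):
                        print("line %d, column %d: control byte 0x%02x"
                              % (lineno, col, byte))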

  • Rate data loader

    Hey, Gurus!
    Does anyone know how many records Oracle Data Loader On Demand sends per package?
    I didn't find anything in the documentation (Data Loader FAQ, Data Loader Overview for R17, Data Loader User Guide).
    thanks in advance
    Rafael Feldberg

    Rafael, there is no upper limit on the number of records that the Data Loader can import. However, after doing a test import using the Import Wizard, I would recommend keeping the number of records at a reasonable level. (As a hint at the transport mechanics, the client log earlier on this page shows the configuration values csvblocksize=1000 and maxsoapsize=10240, and the client submits the CSV as a series of data segments, one per SOAP request.)

  • PAS Data Load  iin Cube Builder Dimensions

    Hi All,
      Two important questions about our SSM implementation that are impacting our development:
      1 - Is it possible to develop a data load to fill dimensions created in Cube Builder? For example, the user wants to be able to fill dimensions manually in SSM BUT also wants to load some additional information via data load into the same cube created via the Cube Builder tool. What I see is that, when we fill a dimension manually in Cube Builder, an internal code is created in the PAS database:
          INPUT
          L0M1281099797397 '001'
          but I am in doubt about how to reproduce the same code via data load in a PAS procedure.
      2 - My customer, in his original system, maintains a relationship between initiatives. Is it possible to do the same in SAP SSM? I looked through the documentation and have not found anything associated with this.
      Regards,
         Cristian

    Hi Cristian,
    Just for clarification: do you want to modify a dimension that was created through Cube Builder? Or do you want to create and maintain a new dimension without going through Cube Builder?
    If you are trying to create a new dimension, how often would this dimension change? And would the change be made manually, or would the structure be available in a table?
    I would usually suggest keeping the dimension structure in a table and using PAS procedures to create/re-create the dimension. You can assign whatever technical name and label you want, and as long as you maintain the same technical name for each member, you should be able to recreate dimensions without losing any data.
    If this is a dimension that you won't change anymore, you can also code it directly in PAS with the structure you find in the other dimensions:
    INPUT
    input_member1_technical_name 'input_member1-label',
    input_member2_technical_name 'input_member2-label',
    input_member3_technical_name 'input_member3-label',
    input_member4_technical_name 'input_member4-label'
    OUTPUT
    output_member1_technical_name 'output_member1-label',
    output_member2_technical_name 'output_member2-label',
    RESULT
    result_member_technical_name 'result_member-label'
    output_member1_technical_name = SUM
    input_member1_technical_name 'input_member1-label',
    input_member2_technical_name 'input_member2-label'
    output_member2_technical_name = SUM
    input_member3_technical_name 'input_member3-label',
    input_member4_technical_name 'input_member4-label'
    result_member_technical_name = SUM
    output_member1_technical_name,
    output_member2_technical_name
    Best regards,
    Ricardo Vieira
