Import data in BPC depending on sign

Hello everybody,
I would like to import data using an external file in SAP BPC depending on the sign of the value and the account. If the value of the account (X, for example) is positive, the value must be entered in the same account X. But if the value is negative, the value must be entered in account Y.
I would like to do this without modifying the external file.
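Expressed in SQL terms, purely to illustrate the rule (the table and column names here are made up):

-- illustration only: the sign-based routing rule I need
SELECT CASE
         WHEN signeddata >= 0 THEN 'X'  -- positive: keep account X
         ELSE 'Y'                       -- negative: post to account Y
       END AS account,
       signeddata
FROM   external_file_rows;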
Thank you very much in advance.
Best regards,
Fernando

Hi,
First, you can check the sales order for completeness of the foreign trade (FT) data using the FT incompletion procedure.
If the data is complete in the sales order, then check the copy control procedure in SPRO -> Sales and distribution -> Billing -> Maintain copy control for billing documents.
At header level, check the field 'Determine export data' and assign the respective values.
Hope this helps to resolve the issue.
Regards

Similar Messages

  • Error when running import data package, BPC 7.5 Microsoft

    When I try to import data into one of my applications, the import fails. This happens with external data, but also with data I extracted from the application itself (input via schedule and then exported). I get the error below:
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    TOTAL STEPS  2
    1. Convert Data:         completed  in 0 sec.
    2. Load and Process:     Failed  in 1 sec.
    3. Import:               completed  in 1 sec.
    [Selection]
    FILE=\GRP_CONSOL\ZHYP_DATA\DataManager\DataFiles\2011_test_schedule.txt
    TRANSFORMATION=\GRP_CONSOL\ZHYP_DATA\DATAMANAGER\TRANSFORMATIONFILES\SYSTEM FILES\IMPORT.XLS
    CLEARDATA= Yes
    RUNLOGIC= Yes
    CHECKLCK= Yes
    [Messages]
    Convert Data
    Success
    Record Count : 12
    Accept Count : 12
    Reject Count : 0
    Skip Count   : 0
    Incorrect syntax near '.'.
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    What can cause this error?
    Marco

    Hi Marco,
    See note [https://service.sap.com/sap/support/notes/1328907].
    This issue is related to the Replace and Clear option and to Work Status not being set up correctly.
    When using the 'Replace & Clear' option with Import packages, the Work Status setting must not only be enabled; the three dimension types 'C' (Category), 'E' (Entity) and 'T' (Time) must also be set to 'Yes' or 'Owner'.
    Thanks,
    John

  • Object reference not set to an instance of an object error with Import data

    Hi Experts,
    We are using BPC 7.5M with SQL Server 2008 in a multi-server environment. I am getting the error "Object reference not set to an instance of an object." while running the Import data package. Earlier we used to get this error occasionally (about once a month), and it would go away if we rebooted the application server, but this time I have rebooted the application server multiple times and am still getting the same error.
    Please advise.
    Thanks & Regards,
    Rohit

    Hi Rohit,
    please see SAP Note 1615837; maybe this helps you.
    Best regards
    Roberto Vidotti

  • Beginner trying to import data from GL_interface to GL

    Hello, I'm a beginner with Oracle GL and I'm not able to do an import from GL_INTERFACE. I have put my data into GL_INTERFACE, but I think something is wrong with it, because when I try to import data
    with Journals -> Import -> Run, Oracle answers that GL_INTERFACE is empty!
    I think that maybe my insert is not correct and that's why Oracle doesn't want to use it... can someone help me?
    ----> I have put the data that way:
    insert into gl_interface
      (status, set_of_books_id, accounting_date, currency_code, date_created,
       created_by, actual_flag, user_je_category_name, user_je_source_name,
       segment1, segment2, segment3, entered_dr, entered_cr, transaction_date, reference1)
    values
      ('NEW', '1609', sysdate, 'FRF', sysdate, 1008009, 'A', 'xx jab payroll',
       'xx jab payroll', '01', '002', '4004', 111.11, 0, sysdate, 'xx_finance_jab_demo');
    insert into gl_interface
      (status, set_of_books_id, accounting_date, currency_code, date_created,
       created_by, actual_flag, user_je_category_name, user_je_source_name,
       segment1, segment2, segment3, entered_dr, entered_cr, transaction_date, reference1)
    values
      ('NEW', '1609', sysdate, 'FRF', sysdate, 1008009, 'A', 'xx jab payroll',
       'xx jab payroll', '01', '002', '1005', 0, 111.11, sysdate, 'xx_finance_jab_demo');
    ------------> Oracle send me that message:
    General Ledger: Version : 11.5.0 - Development
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    GLLEZL module: Journal Import
    Current system time is 14-MAR-2007 15:39:25
    Running in Debug Mode
    gllsob() 14-MAR-2007 15:39:25sob_id = 124
    sob_name = Vision France
    coa_id = 50569
    num_segments = 6
    delim = '.'
    segments =
    SEGMENT1
    SEGMENT2
    SEGMENT3
    SEGMENT4
    SEGMENT5
    SEGMENT6
    index segment is SEGMENT2
    balancing segment is SEGMENT1
    currency = EUR
    sus_flag = Y
    ic_flag = Y
    latest_opened_encumbrance_year = 2006
    pd_type = Month
    << gllsob() 14-MAR-2007 15:39:25
    gllsys() 14-MAR-2007 15:39:25fnd_user_id = 1008009
    fnd_user_name = JAB-DEVELOPPEUR
    fnd_login_id = 2675718
    con_request_id = 2918896
    sus_on = 0
    from_date =
    to_date =
    create_summary = 0
    archive = 0
    num_rec = 1000
    num_flex = 2500
    run_id = 55578
    << gllsys() 14-MAR-2007 15:39:25
    SHRD0108: Retrieved 51 records from fnd_currencies
    gllcsa() 14-MAR-2007 15:39:25<< gllcsa() 14-MAR-2007 15:39:25
    gllcnt() 14-MAR-2007 15:39:25SHRD0118: Updated 1 record(s) in table: gl_interface_control
    source name = xx jab payroll
    group id = -1
    LEZL0001: Found 1 sources to process.
    glluch() 14-MAR-2007 15:39:25<< glluch() 14-MAR-2007 15:39:25
    gl_import_hook_pkg.pre_module_hook() 14-MAR-2007 15:39:26<< gl_import_hook_pkg.pre_module_hook() 14-MAR-2007 15:39:26
    glusbe() 14-MAR-2007 15:39:26<< glusbe() 14-MAR-2007 15:39:26
    << gllcnt() 14-MAR-2007 15:39:26
    gllpst() 14-MAR-2007 15:39:26SHRD0108: Retrieved 110 records from gl_period_statuses
    << gllpst() 14-MAR-2007 15:39:27
    glldat() 14-MAR-2007 15:39:27Successfully built decode fragment for period_name and period_year
    gllbud() 14-MAR-2007 15:39:27SHRD0108: Retrieved 10 records from the budget tables
    << gllbud() 14-MAR-2007 15:39:27
    gllenc() 14-MAR-2007 15:39:27SHRD0108: Retrieved 15 records from gl_encumbrance_types
    << gllenc() 14-MAR-2007 15:39:27
    glldlc() 14-MAR-2007 15:39:27<< glldlc() 14-MAR-2007 15:39:27
    gllcvr() 14-MAR-2007 15:39:27SHRD0108: Retrieved 6 records from gl_daily_conversion_types
    << gllcvr() 14-MAR-2007 15:39:27
    gllfss() 14-MAR-2007 15:39:27LEZL0005: Successfully finished building dynamic SQL statement.
    << gllfss() 14-MAR-2007 15:39:27
    gllcje() 14-MAR-2007 15:39:27main_stmt:
    select int.rowid
    decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
    , '', replace(ccid_cc.SEGMENT2,'.','
    ') || '.' || replace(ccid_cc.SEGMENT1,'.','
    ') || '.' || replace(ccid_cc.SEGMENT3,'.','
    ') || '.' || replace(ccid_cc.SEGMENT4,'.','
    ') || '.' || replace(ccid_cc.SEGMENT5,'.','
    ') || '.' || replace(ccid_cc.SEGMENT6,'.','
    ')
    , replace(int.SEGMENT2,'.','
    ') || '.' || replace(int.SEGMENT1,'.','
    ') || '.' || replace(int.SEGMENT3,'.','
    ') || '.' || replace(int.SEGMENT4,'.','
    ') || '.' || replace(int.SEGMENT5,'.','
    ') || '.' || replace(int.SEGMENT6,'.','
    ') ) flexfield , nvl(flex_cc.code_combination_id,
    nvl(int.code_combination_id, -4))
    , decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
    , '', decode(ccid_cc.code_combination_id,
    null, decode(int.code_combination_id, null, -4, -5),
    decode(sign(nvl(ccid_cc.start_date_active, int.accounting_date-1)
    - int.accounting_date),
    1, -1,
    decode(sign(nvl(ccid_cc.end_date_active, int.accounting_date +1)
    - int.accounting_date),
    -1, -1, 0)) +
    decode(ccid_cc.enabled_flag,
    'N', -10, 0) +
    decode(ccid_cc.summary_flag, 'Y', -100,
    decode(int.actual_flag,
    'B', decode(ccid_cc.detail_budgeting_allowed_flag,
    'N', -100, 0),
    decode(ccid_cc.detail_posting_allowed_flag,
    'N', -100, 0)))),
    decode(flex_cc.code_combination_id,
    null, -4,
    decode(sign(nvl(flex_cc.start_date_active, int.accounting_date-1)
    - int.accounting_date),
    1, -1,
    decode(sign(nvl(flex_cc.end_date_active, int.accounting_date +1)
    - int.accounting_date),
    -1, -1, 0)) +
    decode(flex_cc.enabled_flag,
    'N', -10, 0) +
    decode(flex_cc.summary_flag, 'Y', -100,
    decode(int.actual_flag,
    'B', decode(flex_cc.detail_budgeting_allowed_flag,
    'N', -100, 0),
    decode(flex_cc.detail_posting_allowed_flag,
    'N', -100, 0)))))
    , int.user_je_category_name
    , int.user_je_category_name
    , 'UNKNOWN' period_name
    , decode(actual_flag, 'B'
         , decode(period_name, NULL, '-1' ,period_name), nvl(period_name, '0')) period_name2
    , currency_code
    , decode(actual_flag
         , 'A', actual_flag
         , 'B', decode(budget_version_id
         , 1210, actual_flag
         , 1211, actual_flag
         , 1212, actual_flag
         , 1331, actual_flag
         , 1657, actual_flag
         , 1658, actual_flag
         , NULL, '1', '6')
         , 'E', decode(encumbrance_type_id
         , 1000, actual_flag
         , 1001, actual_flag
         , 1022, actual_flag
         , 1023, actual_flag
         , 1024, actual_flag
         , 1048, actual_flag
         , 1049, actual_flag
         , 1050, actual_flag
         , 1025, actual_flag
         , 999, actual_flag
         , 1045, actual_flag
         , 1046, actual_flag
         , 1047, actual_flag
         , 1068, actual_flag
         , 1088, actual_flag
         , NULL, '3', '4'), '5') actual_flag
    , '0' exception_rate
    , decode(currency_code
         , 'EUR', 1
         , 'STAT', 1
         , decode(actual_flag, 'E', -8, 'B', 1
         , decode(user_currency_conversion_type
         , 'User', decode(currency_conversion_rate, NULL, -1, currency_conversion_rate)
         ,'Corporate',decode(currency_conversion_date,NULL,-2,-6)
         ,'Spot',decode(currency_conversion_date,NULL,-2,-6)
         ,'Reporting',decode(currency_conversion_date,NULL,-2,-6)
         ,'HRUK',decode(currency_conversion_date,NULL,-2,-6)
         ,'DALY',decode(currency_conversion_date,NULL,-2,-6)
         ,'HLI',decode(currency_conversion_date,NULL,-2,-6)
         , NULL, decode(currency_conversion_rate,NULL,
         decode(decode(nvl(to_char(entered_dr),'X'),'X',1,2),decode(nvl(to_char(accounted_dr),'X'),'X',1,2),
         decode(decode(nvl(to_char(entered_cr),'X'),'X',1,2),decode(nvl(to_char(accounted_cr),'X'),'X',1,2),-20,-3),-3),-9),-9))) currency_conversion_rate
    , to_number(to_char(nvl(int.currency_conversion_date, int.accounting_date), 'J'))
    , decode(int.actual_flag
         , 'A', decode(int.currency_code
              , 'EUR', 'User'
         , 'STAT', 'User'
              , nvl(int.user_currency_conversion_type, 'User'))
         , 'B', 'User', 'E', 'User'
         , nvl(int.user_currency_conversion_type, 'User')) user_currency_conversion_type
    , ltrim(rtrim(substrb(rtrim(substrb(int.reference1, 1, 50)) || ' ' || int.user_je_source_name || ' 2918896: ' || int.actual_flag || ' ' || int.group_id, 1, 100)))
    , rtrim(substrb(nvl(rtrim(int.reference2), 'Journal Import ' || int.user_je_source_name || ' 2918896:'), 1, 240))
    , ltrim(rtrim(substrb(rtrim(rtrim(substrb(int.reference4, 1, 25)) || ' ' || int.user_je_category_name || ' ' || int.currency_code || decode(int.actual_flag, 'E', ' ' || int.encumbrance_type_id, 'B', ' ' || int.budget_version_id, '') || ' ' || int.user_currency_conversion_type || ' ' || decode(int.user_currency_conversion_type, NULL, '', 'User', to_char(int.currency_conversion_rate), to_char(int.currency_conversion_date))) || ' ' || substrb(int.reference8, 1, 15) || int.originating_bal_seg_value, 1, 100)))
    , rtrim(nvl(rtrim(int.reference5), 'Journal Import 2918896:'))
    , rtrim(substrb(nvl(rtrim(int.reference6), 'Journal Import Created'), 1, 80))
    , rtrim(decode(upper(substrb(nvl(rtrim(int.reference7), 'N'), 1, 1)),'Y','Y', 'N'))
    , decode(upper(substrb(int.reference7, 1, 1)), 'Y', decode(rtrim(reference8), NULL, '-1', rtrim(substrb(reference8, 1, 15))), NULL)
    , rtrim(upper(substrb(int.reference9, 1, 1)))
    , rtrim(nvl(rtrim(int.reference10), nvl(to_char(int.subledger_doc_sequence_value), 'Journal Import Created')))
    , int.entered_dr
    , int.entered_cr
    , to_number(to_char(int.accounting_date,'J'))
    , to_char(int.accounting_date, 'YYYY/MM/DD')
    , int.user_je_source_name
    , nvl(int.encumbrance_type_id, -1)
    , nvl(int.budget_version_id, -1)
    , NULL
    , int.stat_amount
    , decode(int.actual_flag
    , 'E', decode(int.currency_code, 'STAT', '1', '0'), '0')
    , decode(int.actual_flag
    , 'A', decode(int.budget_version_id
    , NULL, decode(int.encumbrance_type_id, NULL, '0', '1')
    , decode(int.encumbrance_type_id, NULL, '2', '3'))
    , 'B', decode(int.encumbrance_type_id
    , NULL, '0', '4')
    , 'E', decode(int.budget_version_id
    , NULL, '0', '5'), '0')
    , int.accounted_dr
    , int.accounted_cr
    , nvl(int.group_id, -1)
    , nvl(int.average_journal_flag, 'N')
    , int.originating_bal_seg_value
    from GL_INTERFACE int,
    gl_code_combinations flex_cc,
    gl_code_combinations ccid_cc
    where int.set_of_books_id = 124
    and int.status != 'PROCESSED'
    and (int.user_je_source_name,nvl(int.group_id,-1)) in (('xx jab payroll', -1))
    and flex_cc.SEGMENT1(+) = int.SEGMENT1
    and flex_cc.SEGMENT2(+) = int.SEGMENT2
    and flex_cc.SEGMENT3(+) = int.SEGMENT3
    and flex_cc.SEGMENT4(+) = int.SEGMENT4
    and flex_cc.SEGMENT5(+) = int.SEGMENT5
    and flex_cc.SEGMENT6(+) = int.SEGMENT6
    and flex_cc.chart_of_accounts_id(+) = 50569
    and flex_cc.template_id(+) is NULL
    and ccid_cc.code_combination_id(+) = int.code_combination_id
    and ccid_cc.chart_of_accounts_id(+) = 50569
    and ccid_cc.template_id(+) is NULL
    order by decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
    , rpad(ccid_cc.SEGMENT2,30) || '.' || rpad(ccid_cc.SEGMENT1,30) || '.' || rpad(ccid_cc.SEGMENT3,30) || '.' || rpad(ccid_cc.SEGMENT4,30) || '.' || rpad(ccid_cc.SEGMENT5,30) || '.' || rpad(ccid_cc.SEGMENT6,30)
    , rpad(int.SEGMENT2,30) || '.' || rpad(int.SEGMENT1,30) || '.' || rpad(int.SEGMENT3,30) || '.' || rpad(int.SEGMENT4,30) || '.' || rpad(int.SEGMENT5,30) || '.' || rpad(int.SEGMENT6,30)
    ) , int.entered_dr, int.accounted_dr, int.entered_cr, int.accounted_cr, int.accounting_date
    control->len_mainsql = 16402
    length of main_stmt = 7428
    upd_stmt.arr:
    update GL_INTERFACE
    set status = :status
    , status_description = :description
    , je_batch_id = :batch_id
    , je_header_id = :header_id
    , je_line_num = :line_num
    , code_combination_id = decode(:ccid, '-1', code_combination_id, :ccid)
    , accounted_dr = :acc_dr
    , accounted_cr = :acc_cr
    , descr_flex_error_message = :descr_description
    , request_id = to_number(:req_id)
    where rowid = :row_id
    upd_stmt.len: 394
    ins_stmt.arr:
    insert into gl_je_lines
    ( je_header_id, je_line_num, last_update_date, creation_date, last_updated_by, created_by , set_of_books_id, code_combination_id ,period_name, effective_date , status , entered_dr , entered_cr , accounted_dr , accounted_cr , reference_1 , reference_2
    , reference_3 , reference_4 , reference_5 , reference_6 , reference_7 , reference_8 , reference_9 , reference_10 , description
    , stat_amount , attribute1 , attribute2 , attribute3 , attribute4 , attribute5 , attribute6 ,attribute7 , attribute8
    , attribute9 , attribute10 , attribute11 , attribute12 , attribute13 , attribute14, attribute15, attribute16, attribute17
    , attribute18 , attribute19 , attribute20 , context , context2 , context3 , invoice_amount , invoice_date , invoice_identifier
    , tax_code , no1 , ussgl_transaction_code , gl_sl_link_id , gl_sl_link_table , subledger_doc_sequence_id , subledger_doc_sequence_value
    , jgzz_recon_ref , ignore_rate_flag)
    SELECT
    :je_header_id , :je_line_num , sysdate , sysdate , 1008009 , 1008009 , 124 , :ccid , :period_name
    , decode(substr(:account_date, 1, 1), '-', trunc(sysdate), to_date(:account_date, 'YYYY/MM/DD'))
    , 'U' , :entered_dr , :entered_cr , :accounted_dr , :accounted_cr
    , reference21, reference22, reference23, reference24, reference25, reference26, reference27, reference28, reference29
    , reference30, :description, :stat_amt, '' , '', '', '', '', '', '', '', '' , '', '', '', '', '', '', '', '', '', '', ''
    , '', '', '', '', '', '', '', '', '', gl_sl_link_id
    , gl_sl_link_table
    , subledger_doc_sequence_id
    , subledger_doc_sequence_value
    , jgzz_recon_ref
    , null
    FROM GL_INTERFACE
    where rowid = :row_id
    ins_stmt.len: 1818
    glluch() 14-MAR-2007 15:39:27<< glluch() 14-MAR-2007 15:39:27
    LEZL0008: Found no interface records to process.
    LEZL0009: Check SET_OF_BOOKS_ID, GROUP_ID, and USER_JE_SOURCE_NAME of interface records.
    If no GROUP_ID is specified, then only data with no GROUP_ID will be retrieved. Note that most data
    from the Oracle subledgers has a GROUP_ID, and will not be retrieved if no GROUP_ID is specified.
    SHRD0119: Deleted 1 record(s) from gl_interface_control.
    Start of log messages from FND_FILE
    End of log messages from FND_FILE
    Executing request completion options...
    Finished executing request completion options.
    No data was found in the GL_INTERFACE table.
    Concurrent request completed
    Current system time is 14-MAR-2007 15:39:27
    ---------------------------------------------------------------------------

    As the error message says, you need to specify a group id.
    As per the documentation:
    GROUP_ID: Enter a unique group number to distinguish import data within a
    source. You can run Journal Import in parallel for the same source if you specify a
    unique group number for each request.
    For example, if you load data for both payables and receivables, you need to use different group ids to separate the payables data from the receivables data. Also note that your inserts populate set_of_books_id with '1609', while your log shows the import running against sob_id = 124, so those rows would not be picked up in any case. A sketch of an insert with a group id follows.
    HTH
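    For illustration, here is the same kind of insert with a group_id added (the set_of_books_id of 124 is taken from your log; 100 is just an arbitrary group number):
    insert into gl_interface
      (status, set_of_books_id, accounting_date, currency_code, date_created,
       created_by, actual_flag, user_je_category_name, user_je_source_name,
       segment1, segment2, segment3, entered_dr, entered_cr,
       transaction_date, reference1, group_id)
    values
      ('NEW', 124, sysdate, 'FRF', sysdate, 1008009, 'A', 'xx jab payroll',
       'xx jab payroll', '01', '002', '4004', 111.11, 0,
       sysdate, 'xx_finance_jab_demo', 100);  -- group_id: any unique number per run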

  • Best way to Import Data from a SQL Server Table?

    Hi,
    Firstly thanks for looking at this question.
    We need to import data into SAP BPC 5.1 on a twice-daily basis and have chosen not to export to a .CSV file, but instead to hold all data in a SQL table and import it directly from there. As part of the import we wish to run the default logic, however only over the data which is imported, as opposed to having to run default logic over the entire database after every import.
    We did some research on this topic and the only thing we could find that works as described above is the "Import SQL" package. However, we keep experiencing problems with it and have not yet been able to run it successfully; the errors it gives are not consistent from run to run, which makes it difficult to start a thread, though we are getting help from the helpdesk at the moment.
    My question here though is: does someone know of another way to import data into SAP BPC from a SQL table while being able to run default logic over just the data being imported, or is our only hope getting the "Import SQL" package working?
    Any help much appreciated.
    Regards,
    Iain
    Forgot to mention details of our environment:
    SAP BPC v5.0.495, 2 server environment -
    Server 1 (DB/AS/SSIS/File server) = 64bit Windows 2k3 server with 64bit SQL Server Enterprise Edition
    Server 2 (IIS/App server) = 32bit Windows 2k3 server
    Edited by: Iain Hambleton on Jun 17, 2008 3:25 PM

    Iain,
    I recently created SSIS packages that work with a staging table that does all kinds of manipulation of the data before it is loaded into SAP BPC, because it also has to load the data into the drillthrough table within the same package. From one point in my package the data is also in a SQL table, so that is basically the same as your situation. This also works with an export to CSV within the DTSX file, like Alwin said, because in that case you can use the standard transformation and conversion stuff during the load.
    I also needed to limit the data region for logic to the data region that is in the load, for currency conversion purposes. This is not much of a problem. I had a situation with an Accounts Receivable cube containing a daily time dimension for key due dates and a datasrc dimension containing weeks. Every week has a complete overview of the open AR items in that week and needs to be converted for that week (datasrc) only, not for the whole database every time we load. But by default, data would be converted based on the Keyduedates dimension, while I wanted the Week dimension to be used as the data region. I solved it by using these rows in the logic:
    *Scope_by=version,Weeks
    *xdim_memberset weeks=%weeks_set%
    *xdim_memberset version = %version_set%
    I can send you the SSIS package and logic if you want. Just send me your details then.
    -Joost
    Edited by: Joost Hoppenbrouwers on Jun 17, 2008 4:25 PM

  • Import data from a Fixed Length File into Oracle database

    Hi,
    I would like to import data from a fixed-length text file into a table using an HTML DB application.
    As of now, I have a .sql file (that uses an external table) to import the data into Oracle tables.
    I would like to integrate this in my HTML DB application so that the user can directly import the data into the table.
    Sample data
    XXXYYYYZZZZZ
    Data should be read to table that has
    Col1 Col2 Col3
    XXX YYYY ZZZZZ
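    For reference, the external table definition in my .sql file looks roughly like this (the directory and file names are simplified):
    create table fixed_len_ext (
      col1 char(3),
      col2 char(4),
      col3 char(5)
    )
    organization external (
      type oracle_loader
      default directory data_dir
      access parameters (
        records delimited by newline
        fields (
          col1 position(1:3)  char(3),
          col2 position(4:7)  char(4),
          col3 position(8:12) char(5)
        )
      )
      location ('sample.txt')
    );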

    Hi,
    > I would like to import data from a Fixed Length text file into a table using HTML DB application.
    AFAIK, fixed-length imports are not something you can do directly with the HTML DB tools or available APIs.
    > As of now, i have a .sql file (that uses external table) to import the data into Oracle tables. I would like to integrate this in my HTML DB application so that the user can directly import the data into the table.
    Any fixed-length data needs to have a specification associated with it that indicates which character position begins a new column and what each column represents. Some also include what datatype should be used for each of those.
    If you really want to do this I would suggest that you create a table that you can store the file spec in. It would probably have columns for field name, start position, length or end position, datatype and any alias/column name you might want to apply. You would then use this table in a PL/SQL procedure (probably a package with the main processing procedure and various supporting procedures/functions) to read the file into memory, apply the file spec to each line and then do inserts into your table.
    Obviously, this is just a concept or strategy. Implementing it will depend on your PL/SQL skills and determination. If you want to pursue this strategy I'm sure you can find some jump-starts by doing a search on the PL/SQL forum.
    This may not sound like an answer - but the answer is you need to code it to fit your requirements. Hope that helps.
    Earl
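    A minimal sketch of that strategy (all names are made up; a real version would read the file with UTL_FILE or from an upload table):
    create table file_spec (
      field_name varchar2(30),
      start_pos  number,
      field_len  number
    );
    insert into file_spec values ('COL1', 1, 3);
    insert into file_spec values ('COL2', 4, 4);
    insert into file_spec values ('COL3', 8, 5);
    -- apply the spec to one line of the file
    declare
      l_line varchar2(4000) := 'XXXYYYYZZZZZ';
    begin
      for r in (select * from file_spec order by start_pos) loop
        dbms_output.put_line(r.field_name || ' = ' ||
                             substr(l_line, r.start_pos, r.field_len));
      end loop;
    end;
    /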

  • I had to create a new profile in Firefox and now, I can't access important data such as bookmarks which I desperately need.

    After two days of not being able to open Firefox, I found an article on this website which suggested that I create a new profile. I followed the instructions (which never stated that I should first back up Firefox or my old profile). After creating the new profile, Firefox is once again working fine, but I am unable to access important data such as bookmarks and history.
    I have followed several suggestions on this website but I am still unable to find my old profile content.

    As long as you did not uninstall Firefox and remove personal data, your old profile still should be in the same location. Depending on how messed up it was, here are two options:
    '''(1) Use the profile manager to start Firefox up in your old profile and create a fresh bookmarks backup.''' If Firefox wouldn't run in that profile, this won't be useful.
    To start in a different profile, exit Firefox and start it up from the Start menu search box using
    firefox.exe -P
    Your old profile may have a name like default.
    Then you can create a fresh backup or export of your bookmarks (or both) to a neutral location (e.g., the Documents folder). These articles have the steps for creating these files:
    * [[Restore bookmarks from backup or move them to another computer]]
    * [[Export Firefox bookmarks to an HTML file to back up or transfer bookmarks]] (note: does not preserve tags you have assigned to bookmarks, if any)
    Exit Firefox, use the profile manager to return to your new profile, then you can either restore the bookmark backup or import the HTML file. The difference is that a restore completely replaces whatever exists now, while an import adds to what you have now.
    * [[Restore bookmarks from backup or move them to another computer]]
    * [[Import Bookmarks from an HTML file]]
    Success?
    '''(2) Restore an old bookmark backup.''' Firefox creates these files regularly, but it might not be perfectly complete.
    To access the old files, I suggest starting from your current (new) profile folder. You can open that from within Firefox using either:
    * "3-bar" menu button > "?" button > Troubleshooting Information
    * (menu bar) Help > Troubleshooting Information
    * type or paste about:support in the address bar and press Enter
    In the first table on the page, click the "Show Folder" button. This should launch a new window showing your ''current'' settings files. Using the address bar of that window, click up to the Profiles folder level and then double-click into your old profile folder, then into its bookmarkbackups folder. Copy the latest backup file to a neutral location (such as your Documents folder).
    Back in Firefox, try to restore the older backup. Note: this will completely replace your bookmarks in this profile, so if you have saved new bookmarks you want to save, please perform the optional steps in the following list:
    * Optional (preserve new bookmarks): [[Export Firefox bookmarks to an HTML file to back up or transfer bookmarks]]
    * [[Restore bookmarks from backup or move them to another computer]] - use Choose File to go to your Documents folder to get the old backup
    * Optional (add new bookmarks to the restored set): [[Import Bookmarks from an HTML file]]
    Success?

  • Import data of business partner with elm

    Hi,
    when I create a business partner, I have to fill out some mandatory fields.
    Mandatory fields (SAP CRM standard fields):
    name, street, house no, postal code, city
    Mandatory field (field created by us):
    searchterm
    I want to import business partner data with ELM. In the first step I want to create a mapping format with only the mandatory fields. At this point I run into a problem: I can select all standard mandatory fields in the mapping, but not the self-created mandatory field "searchterm".
    Does somebody know what I have to do so that the self-created field is available for selection in the mapping format?
    Thank you in advance!
    Best regards
    Jasmin Hauser

    Hi Jasmin,
    The following is the procedure to use the ELM functionality in CRM 2007:
    a) Store the .csv file on the local machine or an SAP directory (line 1 will be your field names to map for the first time)
    b) Activate workflows in tcode OOCU (check building block C22)
    c) Enhance BADI CRM_MKTLIST_BADI, structure CRM_MKTLIST_PER_EXT, with your "Z" custom fields (note this BADI is filter dependent, which means the mapping formats created need to be added in this BADI to function)
    d) Define list type and origin in customizing (SPRO -> CRM -> Marketing -> External List Management)
    e) Create the mapping format (WebUI -> MKT_Professional business role)
    f) Add the mapping format to BADI CRM_MKTLIST_BADI (tcode SE19; note: deactivate the BADI before adding the mapping format, then activate the BADI again)
    g) Implement SAP Note 915015 to create the ELM members in the business partner role "Prospect"
    h) Create the external list (WebUI -> MKT_Professional business role). Use the mapping format created, and make sure that your .csv file does not contain the field names; the data to upload should be in the same order as the mapping format fields.
    Cheers,
    Peter J.

  • How to retract data from BPC NetWeaver to SAP R/3

    Hi all,
    I have three questions concerning the retraction of data from BPC to R/3:
    1) Can someone explain to me how I can send data from BPC to R/3 (via SAP BI)?
    2) Which is the better choice: can I do this from an InfoCube or a DSO?
    3) And what is the role of the BAPIs "BAPI_0050_CREATION" and "BAPI_TRANSACTION_COMMIT"?
    Or are those not important?

    hi,
    The push principle shows a workaround to start an extractor from the source system. For this you can try APDs (especially for a CRM system) and open hub destinations (OHD).
    Alternatively, kindly refer to the document below, which describes CO-PA retraction as an example:
    http://tleterme.developpez.com/bw/how/HowTo_COPA_Retraction.pdf
    The thread below also explains retraction in detail and has different links that might be helpful to you:
    Retractors to SAP ECC from SAP BI
    regards
    laksh

  • Import data from excel file

    Hello.
    Can anybody help with how to import data from an Excel file into a form created with Designer 7.0? Originally there is a script inside the form to populate the drop-down list, and depending on the data selected in the ID number drop-down list, the description and price text fields are filled out. But now I have to modify this form with data from an Excel file which has more than 30,000 lines, and putting all this data into the script is too much.
    So, can somebody tell me how, after the ID number field is filled in, I can populate the description and price text fields with the data from the Excel file corresponding to this ID number?
    This form is used in Adobe reader.
    Any comments are welcome.
    Regards,
    Aivar

    Hi
    That's what I said in my previous post: clear the cache... :)
    Disable your cache from nqsconfig.ini.
    In the cache section of the NQSConfig file you will find
    ENABLE = YES;
    Set it to NO.
    OR
    if you are using a data warehouse as the source for OBIEE, you know when the ETL is done, so just create an iBot to purge the cache automatically at those particular intervals,
    so that the report runs fresh at that time.
    And what happened to your View Selector question?
    Edited by: Kishore Guggilla on Jul 3, 2009 3:52 PM

  • Extra column to be added while importing data from flat file to SQL Server

    I have 3 flat files, each loaded into a table with only one column.
    The file names are bars, mounds & mini-bars.
    Table 'prd_type' has columns 'typeid' & 'typename', where typeid is auto-incremented and typename is bars, mounds or mini-bars.
    I import data from the 3 files into the prd_details table. This table has columns 'pid', 'typeid' & 'pname', where pid is auto-incremented and pname is mapped to the flat files and gets its info from them; now I want the typeid info to come from the prd_type table.
    Can someone please advise me on this?

    You can get it as follows.
    Assuming you have three separate data flow tasks for the three files, you can do this:
    1. Add a new column to the pipeline using a derived column transformation and hardcode it to bars, mounds or mini-bars depending on the source.
    2. Add a lookup task based on the prd_type table. Use the query:
    SELECT typeid, typename
    FROM prd_type
    Use the full cache option.
    Base the lookup on the derived column (new column -> prd_type.typename relationship) and select typeid as the output column.
    3. In the final OLE DB destination task, map the typeid column to the table's typeid column.
    In case you use a single data flow task, you need to include logic based on the filename or something similar to get the hardcoded type value (bars, mounds or mini-bars) into the data pipeline. A set-based alternative is sketched below.
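    If you would rather do it set-based after a plain load into staging tables, something like this works too (the staging table name is an assumption):
    -- hypothetical set-based alternative: resolve typeid with a join, per file
    INSERT INTO prd_details (typeid, pname)
    SELECT t.typeid, s.pname
    FROM staging_bars AS s
    JOIN prd_type AS t
      ON t.typename = 'bars';  -- repeat with 'mounds' / 'mini-bars'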
    Please mark this as answer if it helps to solve the issue.
    Visakh
    http://visakhm.blogspot.com/
    https://www.facebook.com/VmBlogs

  • Import data to Spatial from other resources?

    Hi all!
    can anybody help me with how to import data into Oracle Spatial from other resources such as Google Maps, Bing...?
    thanks
    Edited by: OBIEE.vn on Dec 3, 2010 8:24 PM

    It depends on how you saved the acquired data. It is possible for sure.
    The first step, for example, would be to acquire the data as a KML file from, for example, Google Earth.
    Then you have the option to manipulate and manage it.
    You can also convert the data from KML to the SDO_GEOMETRY data type.
    For this you can use the existing SDO_UTIL.FROM_KMLGEOMETRY function.
    Note here that the KML input should start with one of the geometry tags like LinearRing, Polygon, etc.
    Additionally, if you manipulated your KML file and transformed it into simple data without geographic coordinates (a spatial reference),
    you can geocode it with, for example, the GEOCODE_AS_GEOMETRY function of the Oracle geocoder.
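    For example, a quick sanity check of the KML conversion (the polygon below is just dummy data):
    -- convert a KML geometry fragment to SDO_GEOMETRY
    SELECT SDO_UTIL.FROM_KMLGEOMETRY(
             '<Polygon><outerBoundaryIs><LinearRing><coordinates>' ||
             '2.29,48.85 2.30,48.85 2.30,48.86 2.29,48.85' ||
             '</coordinates></LinearRing></outerBoundaryIs></Polygon>')
    FROM dual;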

  • Not able to Import data using "clear and replace"

    Hi,
    If I import data using the data admin package "Import" with "Merge" as the 'method for importing', the process runs without problems.
    If I change the 'method for importing' to "Clear and Replace", the process fails. See the message:
    TOTAL STEPS  2
    1. Convert Data:         completed  in 3 sec.
    2. Load and Process:     Failed  in 1 sec.
    3. Import:               completed  in 1 sec.
    [Selection]
    FILE=\UHRENHOLT\LEGAL_DATALOAD\DataManager\DataFiles\\Axapta_Load.txt
    TRANSFORMATION=\UHRENHOLT\LEGAL_DATALOAD\DataManager\TransformationFiles\\Axapta_Load.xls
    CLEARDATA= Yes
    RUNLOGIC= Yes
    CHECKLCK= Yes
    [Messages]
    Key cannot be null.
    Parameter name: key
    I'm using the standard data admin package (and thereby the values 0 and 1). For some reason the value 1 is not accepted.
    Any suggestions?
    /Lars

    Hi,
    The "Replace & clear..." feature during data import depends on Work Status. So to use this functionality, you need to setup Work Status under your application. Notice that you need to setup Work Status even if you aren't selecting the option to check Work Status when running the package.
    Hope this will help you.
    Kind Regards,
    Patrick

  • Import Data using Full Table Name

    Is there a way to run the import data wizard so that, when pulling data from a CSV, it generates an insert statement using not only the table name but also the Oracle schema name, i.e. a fully qualified table name?
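    That is, instead of an insert on the bare table name, I want the wizard to generate something like (schema and table names here are just examples):
    INSERT INTO hr.employees (employee_id, last_name)
    VALUES (1001, 'Smith');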

    Hi Nilanjan,
    I need help ASAP.
    About this DumpLoad task, how does it work? Because I was checking out a package called Import SQL, and that one only imports data from a table; however, that table has to be located within the database that is being used, am I right? Are these two related somehow?
    Does this task order BPC to find data in a SQL table (located on a different server, in a different SQL instance) and import it into the SQL Fac2 table?
    Can you help me with a simple explanation of what I need to do to run this task? The help.sap.com page shows a section within DumpLoad Task Usage called Importing Into SQL Server...
    * Processing the Application using DumpLoad -
    Importing into SQL
    You can use DumpLoad to process the application as a standalone procedure, or with data import, export, or clear.
    Prerequisites
    The DumpLoad task (OsoftTaskDumpLoad2008.dll) is registered with Microsoft SSIS. See Registering Custom Tasks.
    Procedure
    1. Open a package or create a new package in Microsoft SSIS on the Planning and Consolidation server. (I already did this for the DumpLoad task... should I do this for the Import SQL task also?)
    2. Select the task and add it to the package. (WHAT DOES 'SELECT THE TASK' MEAN?)
    3. Open the task, and choose Data Management > None.
    4. Enter the application set, application, and user ID.
    I'm getting confused; can you please provide more details about the whole procedure? Thanks in advance.
    Velázquez

  • Materialized View with "error in exporting/importing data"

    My system is 10g R2 on AIX (dev). When I impdp a dump from another box, also 10g R2, the dump log file contains an error about the materialized view:
    ORA-31693: Table data object "BAANDB"."MV_EMPHORA" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    Desc mv_emphora
    Name      Null?     Type
    C_RID               ROWID
    P_RID               ROWID
    T$CWOC    NOT NULL  CHAR(6)
    T$EMNO    NOT NULL  CHAR(6)
    T$NAMA    NOT NULL  CHAR(35)
    T$EDTE    NOT NULL  DATE
    T$PERI              NUMBER
    T$QUAN              NUMBER
    T$YEAR              NUMBER
    T$RGDT    NOT NULL  DATE
    As I checked here and on Metalink, I found the info has little to do with the MV itself. What was the cause?

    The total lines are 25074, so I used grep from the OS to get the lines involving the MV. Here they are:
    grep -n -i "TTPPPC235201" impBaanFull.log
    5220:ORA-39153: Table "BAANDB"."TTPPPC235201" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
    5845:ORA-39153: Table "BAANDB"."MLOG$_TTPPPC235201" exists and has been truncated. Data will be loaded but all dependent meta data will be skipped due to table_exists_action of truncate
    8503:. . imported "BAANDB"."TTPPPC235201"                     36.22 MB  107912 rows
    8910:. . imported "BAANDB"."MLOG$_TTPPPC235201"               413.0 KB    6848 rows
    grep -n -i "TTCCOM001201" impBaanFull.log
    4018:ORA-39153: Table "BAANDB"."TTCCOM001201" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
    5844:ORA-39153: Table "BAANDB"."MLOG$_TTCCOM001201" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
    9129:. . imported "BAANDB"."MLOG$_TTCCOM001201"               9.718 KB      38 rows
    9136:. . imported "BAANDB"."TTCCOM001201"                     85.91 KB     239 rows
    grep -n -i "MV_EMPHORA" impBaanFull.log
    8469:ORA-39153: Table "BAANDB"."MV_EMPHORA" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
    8558:ORA-31693: Table data object "BAANDB"."MV_EMPHORA" failed to load/unload and is being skipped due to error:
    8560:ORA-12081: update operation not allowed on table "BAANDB"."MV_EMPHORA"
    25066:ORA-31684: Object type MATERIALIZED_VIEW:"BAANDB"."MV_EMPHORA" already exists
    25072: BEGIN dbms_refresh.make('"BAANDB"."MV_EMPHORA"',list=>null,next_date=>null,interval=>null,implicit_destroy=>TRUE,lax=>
    FALSE,job=>44,rollback_seg=>NULL,push_deferred_rpc=>TRUE,refresh_after_errors=>FALSE,purge_option => 1,parallelism => 0,heap_size => 0);
    25073:dbms_refresh.add(name=>'"BAANDB"."MV_EMPHORA"',list=>'"BAANDB"."MV_EMPHORA"',siteid=>0,export_db=>'BAAN'); END;
    The number in front of each line is the line number in the import log.
    Here is my impdp syntax:
    impdp user/pw SCHEMAS=baandb DIRECTORY=baanbk_data_pump DUMPFILE=impBaanAll.dmp LOGFILE=impBaanAll.log TABLE_EXISTS_ACTION=TRUNCATE
    Yes, I can create the MV manually, and I have no problem refreshing it manually after the import.
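    For reference, the manual refresh I run afterwards is just a complete refresh, something like (names as in the log):
    -- complete refresh of the MV after the import
    exec DBMS_MVIEW.REFRESH('"BAANDB"."MV_EMPHORA"', 'C');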
