SSM7.5 SP3 - Loading Bootstrap Data Issue
Hi,
I'm installing SSM 7.5 SP03 but I'm stuck at the Loading Bootstrap step of the installation process.
When I try to do this on the Transporter page:
1. In a Web browser, enter this URL to open the Tools utility:
http://<nwce_server>:<port>/strategy/tools
2. Click Transporter.
3. To load bootstrap data and demo/sample data, browse for the import.zip file located in the
\<install-dir>\SAP BusinessObjects\Strategy Management\InternetPub\conf directory.
I cannot do step 3 because I don't have the "Browse" button!
<a href="http://img444.imageshack.us/img444/2812/bootstap.jpg">http://img444.imageshack.us/img444/2812/bootstap.jpg</a>
What am I doing wrong?
Thanks in advance
Pedro
Pedro,
If you don't have a Browse button, you should have a picklist with the import.zip or bootstrap.zip file there. These files, plus whatever file you transported, should be in
[Install Drive]:\usr\SAP\[db instance name - default is CE1]\J00\j2ee\cluster\server0\temp\SSM\import
Make sure the files are there. You should then see those in the picklist in Transporter.
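For example, a hypothetical copy on Windows, assuming drive E: and the default CE1 instance:
copy "E:\SAP BusinessObjects\Strategy Management\InternetPub\conf\import.zip" "E:\usr\SAP\CE1\J00\j2ee\cluster\server0\temp\SSM\import\"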
Regards,
Bob
Similar Messages
-
Unable to load provider data, syncing issues with update 9.1
Since the install of iTunes 9.1, I have had to reinstall it twice and still get the same issues:
1) Says "iTunes can not sync - disabled on this computer"
2) iTunes was unable to load provider data from sync services.
So I can sync songs, videos, and apps, but I can no longer sync calendar, contacts, etc. This all started right after the last update; up until then it was fine, and no other program changes have been made on the PC.
Reinstalling / repairing iTunes does not resolve the issue. Any ideas on this?
You are not alone; that is the same issue I get. It then tells me it cannot save the backup to my computer. *** is going on with the 9.1 update!
-
Issue while loading master data from BI7 to BPC
Dear Experts,
I'm trying to load master data from BI7 to BPC NW using scenario 2 from the document below.
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00380440-010b-2c10-70a1-e0b431255827
My requirement is to load 0GL_ACCOUNT attribute and text data from the BI7 system to BPC.
1. As described in the How-to document, I created a dimension called GL_ACCOUNT using the BPC Admin client.
2. I am able to see GL_ACCOUNT in RSA1, but when I try to create a transformation (step 17, page 40) to load attribute data, I cannot find 0GL_ACCOUNT (which exists in BI7) as the source object. When I press F4 in the Name field, I can only see the dimensions available in the BPC system.
What could be the reason I am not getting the BI InfoObject as a source in BPC?
Thanks in advance...
regards,
Raju
Dear Gurus,
My issue got resolved. I was trying to pull data from R/3 > BW > BPC, but in the existing landscape BW and BPC are two different boxes, which is why I could not see the BW objects in BPC. To resolve the issue, I created a new InfoObject (via RSD1) in BPC, and the data now loads from R/3 > BPC InfoObject (created through RSD1) > BPC dimension.
Thanks and regards,
Raju -
Issue while loading Master Data through Process Chain in Production
Hi All,
We are getting an error in a process chain while loading master data:
Non-updated Idocs found in Source System
Diagnosis
IDocs were found in the ALE inbox for Source System that are not updated.
Processing is overdue.
Error correction:
Attempt to process the IDocs manually. You can process the IDocs manually using the Wizard or by selecting the IDocs with incorrect status and processing them manually.
I also checked the PSA but could not find any records, and the strange thing is that the job itself is not getting scheduled. Can anyone help me resolve this issue?
Regards
Bhanumathi
Hi,
This problem is not related to the process chain. You can try this:
In RSMO, select the particular load you want to monitor.
In the menu bar, Environment >>> Transact. RFC >>> Select whichever is required, BW or Source System.
In the next screen you can select the Execute button and the IDOCS will be displayed.
Check Note 561880 - Requests hang because IDocs are not processed.
OR
Transact RFC - status running yellow for a long time (the Transact RFC option is available in the Status tab in RSMO).
Step 1: Go to Details > Status, get the IDoc number, and go to BD87 in R/3. Place the cursor on the red IDoc entries in the tRFC queue (under outbound processing) and click Display IDoc on the menu bar.
Step 2: In the next screen, click Display tRFC calls (this takes you to the particular tRFC call in SM58), place the cursor on the particular transaction ID, and go to Edit in the menu bar --> press 'Execute LUW'.
(Display tRFC calls (takes you to the particular tRFC call in SM58) ---> select the transaction ID ---> Edit ---> Execute LUW)
Rather than going to SM58 and executing the LUW directly, it is safer to go through BD87 with the IDoc number, as it takes you to the particular tRFC request for that IDoc.
OR
Go into the job overview of the load; there you should be able to find the data package ID.
(In the RSMO screen > Environment there is an option for the job overview.)
This data package TID is the transaction ID in SM58.
OR
In SM58, enter * / your user name or the background (ALEREMOTE) user name and execute. It will show you all the pending tRFCs with their transaction IDs.
In the Status Text column you can see two statuses:
Transaction Recorded and Transaction Executing.
Don't disturb anything if the status is the second one (Transaction Executing). If the status is the first one (Transaction Recorded), manually run 'Execute LUWs'.
OR
Directly go to SM58, enter * / your user name or the background (ALEREMOTE) user name, and execute. It will show the tRFCs to be executed for that user. Find the particular tRFC (SM37 > request name > TID from the data packet with sysfail), select the transaction ID (SM58) ---> Edit ---> Execute LUW.
(from JituK)
Hope it helps
Darshan -
Load transaction data from ECC to BPC 10.1 using US GAAP starter kit SP3
I need to understand a bit more about the process of loading transactional data into BPC 10.1. We have US GAAP Starter Kit SP3. Below is a screenshot from the config guide (omitted here).
It explains how transactional data can be extracted from the ECC system to the SAP NetWeaver BW DataSource 0FI_GL_10 and then transformed to be loaded into BPC. The objects /PKG/FC_C01 and /PKG/FC_DS01 are mentioned here only as a reference, because they come with the RDS for Financial Close and Disclosure Management, which not all companies have.
I believe the upward data flow should be from DataSource 0FI_GL_10 to DataStore Object 0FIGL_O10 and then to InfoCube 0FIGL_R10. There is also a data flow that goes directly from 0FI_GL_10 to InfoCube 0FIGL_R10. Can anyone with experience with the US GAAP starter kit confirm this?
Thank you.
Hello, we were able to load actuals into our environment with the US GAAP Starter Kit, SP03. I followed the Operating Guide document up to section 5.2 and ran the Consolidation Data Manager package with no issue. We are using the A20 Input and A20 Consolidation process flows based on flows F00, F10, F20, F30, F99, etc. According to the documentation, the Statement of Cash Flow and Changes in Equity should be automatically calculated and available from the changes in Balance Sheet accounts once consolidation is completed. However, when I run the corresponding reports, they bring no data.
We loaded actual data for the whole of 2013 and January 2014. Our intention is to run the first consolidation for January 2014. The closing balance for period 12.2013 in flow F99 was copied to flow F00 of 2014 (opening balance). Flows F20 and F30 were updated with the corresponding balance sheet movements (increase/decrease), according to the delivered controls in the starter kit. However, the cash flow still shows no results.
I found the following text in the operating guide, but I am not clear whether I am missing a step. Can you please clarify? This tells me that I need to copy my 01.2014 opening balance (F00) to the 12.2013 closing balance (F99, which is what the Copy Opening data manager package does, but in the opposite direction, from Y-1 F00 to Y F99) and, in addition, also copy that balance to 12.2013 F00 (previous year opening balance)?
"5.2.2 First Consolidation
When operating the consolidation for a given Scope for the first time in the application, it is necessary to populate and process the prior year-end time period for this Scope, in addition to the first required consolidation period. This is because the flows dedicated to the scope change analysis are properly populated by the consolidation engine provided that at least one automatic journal entry has been detected in the consolidation used as opening consolidation, which by default is the prior year end (December, Y-1). To accomplish this, it is recommended that you copy all input-level opening data of the first required consolidation (flow “F00”) into the closing data (“F99”) and opening data (“F00”) of the prior year-end time period, including intercompany breakdowns.
This breakdown by intercompany in the balance sheet triggers automatic eliminations, which are carried over to the closing position. After the consolidation process, the closing data of the prior year-end time period will contain the appropriate source data for the opening data of the following consolidation, notably the opening automatic journal entries." -
XML Publisher Issue, Unable to see Data --> Load XML Data Option.
Hi,
Can anyone help solve this issue?
I am using the XML Publisher tool to develop Oracle reports, but I am unable to use XML Publisher.
We have XML Publisher version 5.5.
My problem is that I am unable to see *Microsoft Word --> Data --> Load XML Data* after installing XML Publisher successfully.
Everyone on my team works with the same versions, but on my machine I am unable to get this option.
Thanks,
Sowmya.
Hi Sairam,
Check the checkbox Convert database Null values to default and Convert other Null values to default under Edit -> Report Options and then refresh the report.
If this does not work you can contact SAP Support.
- Nrupal -
Idoc issue while loading the data into BI
Hello Gurus,
Initially I had a source system connection problem. After it was fixed by Basis, I followed the process below.
I am loading the data using a generic extractor with selections, as a full load. When I load the data from the R/3 PP application, I find the following in the BW monitor screen:
1. The job completed in the R/3 system and 1 million records were fetched by the extractor.
2. The records were not posted to the BW side because of a tRFC issue.
3. I got the IDoc number and processed it in transaction BD87, but it was not processed successfully; it gives the following error: "Error when saving BOM. Please see the application log."
4. When I check the application log using transaction SLG1 with the time and date of that particular process, it is in yellow status.
Kindly let me know how I can resolve this issue. I have already tried repeating the InfoPackage job, but I am still facing the same issue. I have also checked the connection; it is OK.
Regards
Hello Veerendra,
Thanks for your quick response. Yes, I am able to process it manually, but after processing it ended with status 51, application document not posted.
Could you please help me out with the same?
Regards
Edited by: KK on Nov 4, 2009 2:19 AM
Edited by: KK on Nov 4, 2009 2:28 AM -
.csv flat file loading data issue
Environment: SQL Server 2008 R2
Purpose: Load source data which are stored in a csv file to the destination table
Problem: After setting up the flat file transformation, including the columns in the output alias (screenshot omitted), the load fails because SSIS set the data type to 50 characters while the actual length is 65. I also set the response string field to 100 in the destination table, but an error message is still raised (screenshot omitted).
What could I do differently to overcome this issue? Please advise.
There are a number of them, but none associated with that field:
[SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.
1. [Flat File Source [511]] Error: Data conversion failed. The data conversion for column "Version_cd" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
2. [Flat File Source [511]] Error: The "output column "Version_cd" (604)" failed because truncation occurred, and the truncation row disposition on "output column "Version_cd" (604)" specifies failure on truncation.
A truncation error occurred on the specified object of the specified component.
3. [Flat File Source [511]] Error: An error occurred while processing file "C:\Users\data\ETL files\dummydata.csv" on data row 2.
4. [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Flat File Source" (511) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput().
The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
5. Task Data Flow Task failed
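If it helps: truncation errors like this are usually fixed on the Advanced page of the Flat File Connection Manager, by raising OutputColumnWidth for Version_cd from 50 to at least 65, and by confirming the destination column is at least as wide. A minimal T-SQL sketch for the destination side (table and column names here are hypothetical):
-- confirm the current width of the destination column
SELECT CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Destination' AND COLUMN_NAME = 'Version_cd';
-- widen it if needed
ALTER TABLE dbo.Destination ALTER COLUMN Version_cd VARCHAR(100) NULL;
-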
We have a tool built in Excel + Power Query version 2.18.3874.242 - 32-bit (no PowerPivot), loading data to the data model (not to the workbook). There are data filters linked to Excel cells, inserted into the OData query before the data is pulled.
The Excel tool uses organisational credentials to authenticate.
System config: Win 8.1, Office 2013 (32 bit)
The tool runs for all users as long as they do not upgrade to Power Query 2.20.3945.242 (32-bit).
Once upgraded, users can no longer get the data to load into the model. Data still loads into the workbook, but the model breaks down. Resetting the load to the data model erases all measures.
Here are the exact errors users get:
1. [DataSource.Error] Cannot parse OData response result. Error: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
2. The Data Model table could not be refreshed: There isn't enough memory to complete this action. Try using less data or closing other applications. To increase memory available, consider ......
Hi Nitin,
Is this still an issue? If so, can you kindly provide the details that Hadeel has asked for?
Regards,
Michael Amadi
Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to vote it as helpful :)
Website: http://www.nimblelearn.com, Twitter:
@nimblelearn -
Duplicate Key issue while Loading BSEG data
Hello Experts,
I was loading (not replicating) BSEG data into HANA, and it seems that after loading around 100 million records it went into an error, with the below error message in SLG1 (screenshot omitted):
I don't think we can ever have a duplicate record in the BSEG table itself for a given combination of Company Code, Accounting Document, Fiscal Year, and Item. Then how come we are having this issue in replication?
There is a piece of SLT code in which we are doing two things:
a) Loading only 2013 data
b) Defaulting for the invalid/blank date issue
It would be great if somebody could throw some light on this.
Thanks & regards,
Jomy
Hi,
have you implemented this note in the system?
Note 1992138 - SLT (2011 SP06 / 2010 SP10) - Correction 09
Best,
Tobias -
Issue while loading master data from BI to BPC
Dear All,
I am loading master data from BI to BPC using the process chain /CPMB/IMPORT_IOBJ_MASTER. I ran the package and the package status was success, but a few member IDs that are available in BI are not getting loaded into BPC. I have nearly 1300 IDs in BI; out of 1300, only 1270 IDs were loaded into BPC.
I haven't restricted any IDs using the conversion file either.
What could be the reason? How can I overcome this and load all member IDs into BPC?
Thanks & Regards,
Ramesh.
Ramesh,
What write mode have you chosen for the /CPMB/IMPORT_IOBJ_MASTER chain?
Check if you have the Update mode.
Hope this helps... -
Issue in loading master data in BI 7.0
Hi,
I fail to understand how to load master data in BI 7.0. Can you please fill in the steps? What I have done so far:
1. Created an InfoArea.
2. Created 2 InfoObject catalogs (for char and KF).
3. Created characteristics like EMPNAME and MONTH, and SALARY as a key figure.
4. For EMPNAME, attached the attributes DEPT and ADDRESS (master data).
Now what steps should I follow to load the master data for all employees, and what should my flat file format be?
Thanks
RSA1 - double-click on the flat file source system - right-click on the application component - create a DataSource of type Master Data Attributes - enter the required details - read the file in the Extraction tab - in the Fields tab, add the InfoObjects already created under the InfoObject column - check the Preview to see that it shows the data fine - activate the DataSource
Create a transformation between the DataSource and the InfoObject - activate it
Create an InfoPackage on the master data attributes DataSource - run it
Create a DTP - run it to load from the PSA to the InfoObject
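For the flat file format, a minimal sketch with purely illustrative values (a header row, then one row per employee, with the columns in the same order as the fields defined in the DataSource):
EMPNAME;DEPT;ADDRESS
SMITH;SALES;12 MAIN STREET
JONES;FINANCE;34 HIGH STREET
-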
Language issue when loading master data text
I have a customer master data attribute and text.
This master data is a customized one. Within the function module, based on sy-langu, it extracts the English, German, French, ... customer data.
On the R/3 side, in transaction RSA3, when I log on in each logon language, it extracts the correct information, with text values based on the logon language.
When I load the data for each language, however, it is not loading; it updates only English.
I logged on as French and as German and loaded the attribute and text data, but it still does not update the master data text values with French or German.
In the InfoPackage selection I also set DE and FR.
The master data InfoObject's text is also marked as language-dependent.
Thanks in advance
Hello smartone,
I'm facing the same problem right now (only getting EN). So maybe we can share some experience.
On help.sap.com I found the following link:
<a href="http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a659be07211d2acb80000e829fbfe/frameset.htm">http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a659be07211d2acb80000e829fbfe/frameset.htm</a>
Almost at the bottom of the page there is a description of the language vector. The settings made in this vector control the extracted languages. By default, all languages installed in the system are marked here.
The following link describes how to change the settings.
<a href="http://help.sap.com/saphelp_nw2004s/helpdata/en/fc/1e6df8bf2a11d2b437006094b9ea64/frameset.htm">http://help.sap.com/saphelp_nw2004s/helpdata/en/fc/1e6df8bf2a11d2b437006094b9ea64/frameset.htm</a>
But to be honest, I do not know how to get to the parameters. Otherwise I would have tried to check/change them. Does anyone have an idea?
Regards,
Martin
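For reference, once all languages extract and load correctly, the language-dependent text table should end up with one row per language key, along the lines of this purely illustrative sketch:
LANGU  CUSTOMER    TXTMD
EN     0000001000  Pump
DE     0000001000  Pumpe
FR     0000001000  Pompe
-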
Cfreport & "Error loading image data"
I'm using CF Report Builder.
I insert into the report an image linked to a URL.
If I execute the report, I receive this error:
Report data binding error Error loading image data :
http://DOMAIN/PATH/0701.jpg.
The image exists: if I copy the URL into my browser, it appears.
If I replace the link with a filesystem path, I receive the same error.
This is the error exception:
"Error","jrpp-7","01/29/07","21:13:20","marcappelli_frontend","Report
data binding error Error loading image data :
http://venusia/lavoro/marsportcappelli_it/html/catalogo/2007/0701.jpg.Error
loading byte data :
http://venusia/lavoro/marsportcappelli_it/html/catalogo/2007/0701.jpg
The specific sequence of files included or processed is:
/home/htdocs/clienti/marsportcappelli_it/html/_script/catalogo2007.cfm
coldfusion.runtime.report.Report$ReportDataBindingException:
Report data binding error Error loading image data :
http://venusia/lavoro/marsportcappelli_it/html/catalogo/2007/0701.jpg.
at
coldfusion.runtime.report.Report.runReport(Report.java:303)
at
coldfusion.tagext.lang.ReportTag.doEndTag(ReportTag.java:467)
at
coldfusion.runtime.CfJspPage._emptyTag(CfJspPage.java:1909)
at
cfcatalogo20072ecfm389367973.runPage(/home/htdocs/clienti/marsportcappelli_it/html/_scrip t/catalogo2007.cfm:13)
at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:152)
at
coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:343)
at
coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
at
coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:210)
at
coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:51)
at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
at
coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:50)
at
coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:52)
at
coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:2 8)
at
coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
at
coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
at
coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
at coldfusion.CfmServlet.service(CfmServlet.java:105)
at
coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:78)
at
jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:91)
at
jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
at
jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:257)
at
jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:527)
at
jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:204)
at
jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:349)
at
jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:457)
at
jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:295)
at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
Any ideas?
Any help is very appreciated!
Thanks
Rob
I think the new error you saw was due to the cross-schema issue on 10.1.0.2/10.1.0.3 (as described in another thread on creating pyramids).
You should connect as SYSTEM to grant the permissions. With 10.1.0.2/10.1.0.3, importFrom should be called from the same schema where the GeoRaster tables are defined. (Upgrade to 10.1.0.4 or higher if you want to perform cross-schema operations.)
Other than the connection part, your scripts looked OK. Please make sure that the file with the specified file name does exist and is on the same machine where the database is running. Note that even if the grant_permission call is executed successfully, it does not necessarily mean the following file operations will always succeed. For example, grant_permission can be called successfully on a file that does not even exist. If a file with the specific name is later created, no permissions are granted on the new file. The case with the schema 'GIS' is similar. You need to make sure that the schema 'GIS' and the file do exist before the grant_permission call; and grant_permission can be called repeatedly, if necessary.
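For reference, a minimal sketch of such a grant, run while connected as SYSTEM (the grantee schema and file path here are placeholders only):
BEGIN
   -- let the GIS schema read the source image file before calling importFrom
   DBMS_JAVA.grant_permission ('GIS',
                               'SYS:java.io.FilePermission',
                               '/data/rasters/0701.jpg',
                               'read');
   COMMIT;
END;
/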
Hope this helps. -
How can I load my data faster? Is there a SQL solution instead of PL/SQL?
11.2.0.2
Solaris 10 sparc
I need to backfill invoices from a customer. The raw data has 3.1 million records. I have used PL/SQL to load these invoices into our system (dev); however, the issue is the amount of time it takes to run the load, effectively about 4 hours. (The raw data has been loaded into a staging table.)
My research keeps coming back to one concept: SQL is faster than PL/SQL. Where I'm stuck is the need to load the data programmatically. The invoice table has a sequence on it (primary key = invoice_id); the invoice_header and invoice_address tables use invoice_id as a foreign key. So my script takes advantage of knowing the primary key and uses it on the subsequent inserts into the subordinate invoice_header and invoice_address tables, respectively.
My script is below. What I'm asking is whether there are other ideas on the quickest way to load this data; what am I not considering? I have to load the data in dev, QA, then production, so the sequences and such change between the environments. I've dummied down the code to protect the customer; the syntax and correctness of the code posted here (on the forum) is moot, as it's only posted to give the framework for what I currently have.
Any advice would be greatly appreciated; how can I load the data faster, given that I need to know sequence values for inserts into other tables?
DECLARE
v_inv_id invoice.invoice_id%TYPE;
v_inv_addr_id invoice_address.invoice_address_id%TYPE;
errString invoice_errors.sqlerrmsg%TYPE;
v_guid VARCHAR2 (128);
v_str VARCHAR2 (256);
v_err_loc NUMBER;
v_count NUMBER := 0;
l_start_time NUMBER;
TYPE rec IS RECORD (
BILLING_TYPE VARCHAR2 (256),
CURRENCY VARCHAR2 (256),
BILLING_DOCUMENT VARCHAR2 (256),
DROP_SHIP_IND VARCHAR2 (256),
TO_PO_NUMBER VARCHAR2 (256),
TO_PURCHASE_ORDER VARCHAR2 (256),
DUE_DATE DATE,
BILL_DATE DATE,
TAX_AMT VARCHAR2 (256),
PAYER_CUSTOMER VARCHAR2 (256),
TO_ACCT_NO VARCHAR2 (256),
BILL_TO_ACCT_NO VARCHAR2 (256),
NET_AMOUNT VARCHAR2 (256),
NET_AMOUNT_CURRENCY VARCHAR2 (256),
ORDER_DT DATE,
TO_CUSTOMER VARCHAR2 (256),
TO_NAME VARCHAR2 (256),
FRANCHISES VARCHAR2 (4000),
UPDT_DT DATE
);
TYPE tab IS TABLE OF rec
INDEX BY BINARY_INTEGER;
pltab tab;
CURSOR c
IS
SELECT billing_type,
currency,
billing_document,
drop_ship_ind,
to_po_number,
to_purchase_order,
due_date,
bill_date,
tax_amt,
payer_customer,
to_acct_no,
bill_to_acct_no,
net_amount,
net_amount_currency,
order_dt,
to_customer,
to_name,
franchises,
updt_dt
FROM BACKFILL_INVOICES;
BEGIN
l_start_time := DBMS_UTILITY.get_time;
OPEN c;
LOOP
FETCH c
BULK COLLECT INTO pltab
LIMIT 1000;
v_err_loc := 1;
FOR i IN 1 .. pltab.COUNT
LOOP
BEGIN
v_inv_id := SEQ_INVOICE_ID.NEXTVAL;
v_guid := 'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff');
v_str := str_parser (pltab (i).FRANCHISES); --function to string parse - this could be done in advance, yes.
v_err_loc := 2;
v_count := v_count + 1;
INSERT INTO invoice nologging
VALUES (v_inv_id,
pltab (i).BILL_DATE,
v_guid,
'111111',
'NONE',
TO_TIMESTAMP (pltab (i).BILL_DATE),
TO_TIMESTAMP (pltab (i).UPDT_DT),
'READ',
'PAPER',
pltab (i).payer_customer,
v_str,
'111111');
v_err_loc := 3;
INSERT INTO invoice_header nologging
VALUES (v_inv_id,
TRIM (LEADING 0 FROM pltab (i).billing_document), --invoice_num
NULL,
pltab (i).BILL_DATE, --invoice_date
pltab (i).TO_PO_NUMBER,
NULL,
pltab (i).net_amount,
NULL,
pltab (i).tax_amt,
NULL,
NULL,
pltab (i).due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
TO_TIMESTAMP (SYSDATE),
TO_TIMESTAMP (SYSDATE),
PLTAB (I).NET_AMOUNT_CURRENCY,
(SELECT i.bc_value
FROM invsvc_owner.billing_codes i
WHERE i.bc_name = PLTAB (I).BILLING_TYPE),
PLTAB (I).BILL_DATE);
v_err_loc := 4;
INSERT INTO invoice_address nologging
VALUES (invsvc_owner.SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH INITIAL',
pltab (i).BILL_DATE,
NULL,
pltab (i).to_acct_no,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 5;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_ACCT_NO,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 6;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH2',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_CUSTOMER,
pltab (i).to_name,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 7;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH3',
pltab (i).BILL_DATE,
NULL,
'SOME PROPRIETARY DATA',
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 8;
INSERT
INTO invoice_event nologging (id,
eid,
root_eid,
invoice_number,
event_type,
event_email_address,
event_ts)
VALUES ( SEQ_INVOICE_EVENT_ID.NEXTVAL,
'111111',
'222222',
TRIM (LEADING 0 FROM pltab (i).billing_document),
'READ',
'some_user@some_company.com',
SYSTIMESTAMP);
v_err_loc := 9;
INSERT INTO backfill_invoice_mapping
VALUES (v_inv_id,
v_guid,
pltab (i).billing_document,
pltab (i).payer_customer,
pltab (i).net_amount);
IF v_count = 10000
THEN
COMMIT;
END IF;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (
pltab (i).billing_document,
pltab (i).payer_customer,
errString || ' ' || v_err_loc);
COMMIT;
END;
END LOOP;
v_err_loc := 10;
INSERT INTO backfill_invoice_timing
VALUES (
ROUND ( (DBMS_UTILITY.get_time - l_start_time) / 100,
2)
|| ' seconds.',
(SELECT COUNT (1)
FROM backfill_invoice_mapping),
(SELECT COUNT (1)
FROM backfill_invoice_errors),
SYSDATE);
COMMIT;
EXIT WHEN c%NOTFOUND;
END LOOP;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (NULL, NULL, errString || ' ' || v_err_loc);
COMMIT;
END;
Hello
You could use INSERT ALL in your case and make use of sequence.NEXTVAL and sequence.CURRVAL like so (excuse any typos, I can't test without the table definitions). I've done the first two tables, so it's just a matter of adding the rest in...
INSERT ALL
INTO invoice nologging
VALUES ( SEQ_INVOICE_ID.NEXTVAL,
BILL_DATE,
my_guid,
'111111',
'NONE',
CAST(BILL_DATE AS TIMESTAMP),
CAST(UPDT_DT AS TIMESTAMP),
'READ',
'PAPER',
payer_customer,
parsed_francises,
'111111')
INTO invoice_header
VALUES ( SEQ_INVOICE_ID.CURRVAL,
TRIM (LEADING 0 FROM billing_document), --invoice_num
NULL,
BILL_DATE, --invoice_date
TO_PO_NUMBER,
NULL,
net_amount,
NULL,
tax_amt,
NULL,
NULL,
due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
SYSTIMESTAMP,
NET_AMOUNT_CURRENCY,
bc_value,
BILL_DATE)
SELECT
src.billing_type,
src.currency,
src.billing_document,
src.drop_ship_ind,
src.to_po_number,
src.to_purchase_order,
src.due_date,
src.bill_date,
src.tax_amt,
src.payer_customer,
src.to_acct_no,
src.bill_to_acct_no,
src.net_amount,
src.net_amount_currency,
src.order_dt,
src.to_customer,
src.to_name,
src.franchises,
src.updt_dt,
str_parser (src.FRANCHISES) parsed_franchises,
'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff') my_guid,
i.bc_value
FROM BACKFILL_INVOICES src,
invsvc_owner.billing_codes i
WHERE i.bc_name = src.BILLING_TYPE;
Some things to note:
1. Don't commit in a loop - you only add to the run time and load on the box ultimately reducing scalability and removing transactional integrity. Commit once at the end of the job.
2. Make sure you specify the list of columns you are inserting into as well as the values or columns you are selecting. This is good practice as it protects your code from compilation issues in the event of new columns being added to tables. Also it makes it very clear what you are inserting where.
3. If you use WHEN OTHERS THEN ... to log something, make sure you either roll back or re-raise the exception. What you have done in your code is say: I don't care what the problem is, just commit whatever has been done. This is not good practice.
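A minimal sketch of that last point, using the variables from your block (log_error is a hypothetical logging procedure, ideally declared with PRAGMA AUTONOMOUS_TRANSACTION so the log entry survives the rollback):
EXCEPTION
   WHEN OTHERS
   THEN
      errString := SQLERRM;
      ROLLBACK;                          -- undo the partial work instead of committing it
      log_error (errString, v_err_loc);  -- hypothetical autonomous-transaction logger
      RAISE;                             -- re-raise so the caller sees the failure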
HTH
David
Edited by: Bravid on Oct 13, 2011 4:35 PM