WF_DEFERRED_QUEUE_M table creation for 11g
We are trying to create WF_DEFERRED_QUEUE_M after an upgrade from 10g to 11g.
For table WF_DEFERRED_QUEUE_M:
compatible parameter 8.0 creates the table but refuses to create the queue.
compatible parameters 8.1 and 10.0 give "object already exists".
All of the compatible parameters work if we specify MULTIPLE_CONSUMERS = FALSE.
Please help: how do we create the WF_DEFERRED_QUEUE_M table with MULTIPLE_CONSUMERS = TRUE for 11g?
(The Workflow Background Process program is failing with ORA-24039: Queue WF_DEFERRED_QUEUE_M not created in queue table for multiple consumers.)
What is the application release?
We are trying to create WF_DEFERRED_QUEUE_M after upgrade to 11g from 10g
How? Are you using the script in (Workflow Queues Creation Scripts [ID 398412.1])?
For table WF_DEFERRED_QUEUE_M with
compatible parameter 8.0 works to create the table but refuses to create the queue.
compatible parameters 8.1 and 10.0 give "object already exists"
WF_DEFERRED_QUEUE_M is a queue -- http://etrm.oracle.com/pls/trm11510/etrm_pnav.show_object?c_name=WF_DEFERRED_QUEUE_M&c_owner=APPLSYS&c_type=QUEUE
all of the compatible parameters work if we specify MULTIPLE_CONSUMERS = FALSE
Please help how do we create WF_DEFERRED_QUEUE_M table with option of MULTIPLE_CONSUMERS=TRUE for 11g
(workflow background process program is failing with error of ORA-24039: Queue WF_DEFERRED_QUEUE_M not created in queue table for multiple consumers)
Have you reviewed these docs (assuming you use the right script to create this queue)?
USING SINGLE-CONSUMER QUEUE CAUSES ORA-24039 [ID 1301605.1]
How to Resolve ORA-24039 on DBMS_AQADM.ADD_SUBSCRIBER [ID 98007.1]
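For orientation only, a hedged sketch of what recreating a multi-consumer queue table and queue with DBMS_AQADM looks like. This is not the supported Workflow script (use MOS note 398412.1 for that); the APPLSYS owner, the SYSTEM.WF_PAYLOAD_T payload type, and the compatible value are assumptions:

```sql
-- Sketch only: run the supported script from MOS note 398412.1 in production.
-- Assumptions: APPLSYS owns the objects; payload type is SYSTEM.WF_PAYLOAD_T.
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'APPLSYS.WF_DEFERRED_QUEUE_M',
    queue_payload_type => 'SYSTEM.WF_PAYLOAD_T',
    multiple_consumers => TRUE,   -- without this, ADD_SUBSCRIBER raises ORA-24039
    compatible         => '10.0');
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'APPLSYS.WF_DEFERRED_QUEUE_M',
    queue_table => 'APPLSYS.WF_DEFERRED_QUEUE_M');
  DBMS_AQADM.START_QUEUE(queue_name => 'APPLSYS.WF_DEFERRED_QUEUE_M');
END;
/
```

The key point is that MULTIPLE_CONSUMERS is a property of the queue table, fixed at creation time, which is why the background process cannot add subscribers to a single-consumer table.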
Thanks,
Hussein
Similar Messages
-
We have a homegrown Access database, originally designed in 2000, that now has a SQL back-end. The database has not yet been converted to a newer format such as Access 2007, since at least two users are still on Access 2003. It is fine if suggestions
will only work with Access 2007 or higher.
I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products each with a single identifier. There are customers who provide us regular sales reporting for what was
sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing although we have
some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report. Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named. The
product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week. Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
period, sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
current status code for that product, and so on.
Currently, all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to
set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or
categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain
the sales reporting information indefinitely.
I welcome any suggestions, best practice or resources (books, web, etc).
Many thanks!
Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to
set-up tables .....
I assume you want to migrate to SQL Server.
Your best course of action is to hire a professional database designer for a short period like a month.
Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
Finally you have to hire an SSRS professional to design reports for your company.
It is also beneficial if the above professionals train your staff while building the new RDBMS.
Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
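As a rough illustration of the kind of normalized design a database professional might propose for the reporting data described above (a T-SQL-flavored sketch; every table and column name here is an illustrative assumption, not from the thread):

```sql
-- Illustrative sketch only; names, types and columns are assumptions.
CREATE TABLE Product (
    ProductID    VARCHAR(30) PRIMARY KEY,  -- the single product identifier
    Description  VARCHAR(100)
);

CREATE TABLE SalesReport (
    ReportID     INT IDENTITY PRIMARY KEY,
    CustomerName VARCHAR(60) NOT NULL,
    PeriodStart  DATE NOT NULL,            -- handles non-calendar customer periods
    PeriodEnd    DATE NOT NULL
);

-- One row per product per report; columns a customer's report does not
-- supply simply stay NULL, which accommodates the varying 4-30 columns.
CREATE TABLE SalesFact (
    ReportID           INT REFERENCES SalesReport(ReportID),
    ProductID          VARCHAR(30) REFERENCES Product(ProductID),
    OnHandUnits        INT NULL,
    OnOrderUnits       INT NULL,
    SellThroughUnits   INT NULL,
    SellThroughDollars DECIMAL(12,2) NULL,
    PRIMARY KEY (ReportID, ProductID)
);
```

With the time period stored as explicit start/end dates on the report row, cross-customer and cross-period queries become straightforward joins rather than manual spreadsheet cross-referencing.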
Kalman Toth Database & OLAP Architect
SELECT Video Tutorials 4 Hours
New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012 -
Reg: Table creation for generic source
Hi,
I tried to create a generic data source using a table that I created. But when I try to enter entries into the table using transaction SE16, one field is not coming up at all.
I don't understand why this field is not appearing. I checked the enhancement category, and that allows it too. I changed the position of the field in the table and tried again, but it still does not appear when creating entries in SE16.
What can be the problem? When we check the table in SE12, we are able to see the field. When we check through SE16, it is also available. Only when we want to make entries is it not showing.
Can someone help me resolve this issue?
Hi,
Yes, I am able to see the field in the data source as well.
I want to do flat file loading, so I am using an LSMW recording for it. In SE16 I need to create dummy entries, but as this field is not coming up, I can't populate it.
When I tried in SE16 outside LSMW, the field is also not there. -
Condition Table creation for Taxes (TAXINN)
I want to create a condition table for Taxing procedure with application <b>TX</b> with the following fields : <b>Plant , Control Code</b>
Kindly tell me the SPRO path where i can create a condition table for Taxes.
Regards
Jyotsna
Dear Jyotsna,
We can create taxes in two ways: one on the FI/CO side and another on the SD side.
On the FI/CO side you can do it in FTXP.
In SD you can do it in OVK1.
Hope this helps you.
Prem. -
External Table Creation for .dat file
Hi,
I have created an External Table, below is the code
CREATE TABLE "NFO_DATA_LOAD_STAGING_TEST1"
( "DATA_DISCRAPANCY" VARCHAR2(100),
"APPLICATION_NUMBER" VARCHAR2(30),
"BATCH_NUMBER" VARCHAR2(30),
"AMC_ID" VARCHAR2(30),
"TA_BRANCH" VARCHAR2(30),
"SERVER_DATE_1" DATE,
"SERVER_TIME_1" DATE,
"ARN_NUMBER" VARCHAR2(30),
"BROKER_CODE" VARCHAR2(30),
"SUB_BROKER_CODE_1" VARCHAR2(30),
"SUB_BROKER_CODE_2" VARCHAR2(30),
"SUB_BROKER_CODE_3" VARCHAR2(30),
"BROKER_ADDRESS" VARCHAR2(100),
"FOLIO_NO" NUMBER,
"MODE_OF_HOLDING" VARCHAR2(30),
"STATUS" VARCHAR2(30),
"SALUATION" VARCHAR2(30),
"CONTACT_PERSON_NAME" VARCHAR2(30),
"INVESTOR_NAME" VARCHAR2(30),
"GUARDIAN_NAME" VARCHAR2(30),
"PAN_NO" VARCHAR2(30),
"POA_HOLDER_NAME" VARCHAR2(30),
"KYC" VARCHAR2(30),
"GUARDIAN_PAN_NO" VARCHAR2(30),
"DOB_DOC" DATE,
"GUARDIAN_RELATIONSHIP" VARCHAR2(30),
"SALUATION_2" VARCHAR2(30),
"DATE_OF_BIRTH" DATE,
"SECOND_HOLDER_NAME" VARCHAR2(30),
"GUARDIAN_NAME_2" VARCHAR2(30),
"PAN_NO_2" VARCHAR2(30),
"GUARDIAN_PAN_NO_2" VARCHAR2(30),
"KYC_2" VARCHAR2(30),
"GUARDIAN_RELATIONSHIP_2" VARCHAR2(30),
"SALUATION_3" VARCHAR2(30),
"DATE_OF_BIRTH_3" DATE,
"THIRD_HOLDER_NAME" VARCHAR2(30),
"GUARDIAN_NAME_3" VARCHAR2(30),
"PAN_NO_3" VARCHAR2(30),
"GUARDIAN_PAN_NO_3" VARCHAR2(30),
"KYC_3" VARCHAR2(30),
"GUARDIAN_RELATIONSHIP_3" VARCHAR2(30),
"ADDRESS_LINE_1" VARCHAR2(100),
"PINCODE" NUMBER,
"ADDRESS_LINE_2" VARCHAR2(100),
"CONTACT_NUMBER_O" VARCHAR2(15),
"CITY" VARCHAR2(30),
"CONTACT_NUMBER_R" VARCHAR2(15),
"STATE" VARCHAR2(30),
"MOBILE_NUMBER" VARCHAR2(15),
"COUNTRY" VARCHAR2(30),
"EMAIL_ID" VARCHAR2(30),
"ALT_EMAIL_ID" VARCHAR2(30),
"ADDRESS_LINE_1_A" VARCHAR2(100),
"PINCODE_A" NUMBER,
"ADDRESS_LINE_2_A" VARCHAR2(100),
"CONTACT_NUMBER_O_A" VARCHAR2(15),
"CITY_A" VARCHAR2(30),
"CONTACT_NUMBER_R_A" VARCHAR2(15),
"STATE_A" VARCHAR2(30),
"MOBILE_NUMBER_A" VARCHAR2(15),
"COUNTRY_A" VARCHAR2(30),
"EMAIL_ID_A" VARCHAR2(30),
"ALT_EMAIL_ID_A" VARCHAR2(30),
"DESPATCH_ACCOUNT" VARCHAR2(30),
"REDEMPTION_PAYOUT" VARCHAR2(30),
"I_PIN_ASSIGN" VARCHAR2(30),
"DIVIDEND_PAYOUT" VARCHAR2(30),
"T_PIN_ASSIGN" VARCHAR2(30),
"TO_FUND_NAME" VARCHAR2(30),
"TO_FUND_ID" VARCHAR2(30),
"OPTION_F_S" VARCHAR2(30),
"MCR_NO" NUMBER,
"BANK_NAME" VARCHAR2(30),
"ACCOUNT_NUMBER" NUMBER,
"BANK_BRANCH" VARCHAR2(30),
"ACCOUNT_TYPE" VARCHAR2(30),
"MCR_NO_P" NUMBER,
"CHEQUE_DD_NO" NUMBER,
"BANK_NAME_P" VARCHAR2(30),
"PAYMENT_DATE" DATE,
"BANK_BRANCH_P" VARCHAR2(30),
"ADVICE_NO" NUMBER,
"PAYMENT_TYPE_P" VARCHAR2(30),
"AMOUNT_P" NUMBER,
"MINIMUM_AMOUNT" NUMBER,
"AMOUNT_WORDS" VARCHAR2(1000),
"NOMINEE_NAME" VARCHAR2(30),
"SALUATION_STG" VARCHAR2(30),
"NOMINEE_DOB_STG" DATE,
"GUARDIAN_NAME_STG" VARCHAR2(30),
"RELATIONSHIP_STG" VARCHAR2(30),
"ADDRESS_LINE_1_STG" VARCHAR2(100),
"COUNTRY_STG" VARCHAR2(30),
"ADDRESS_LINE_2_STG" VARCHAR2(100),
"PINCODE_STG" NUMBER,
"CITY_STG" VARCHAR2(30),
"CONTACT_NUMBER_STG" VARCHAR2(15),
"STATE_STG" VARCHAR2(30),
"MOBILE_NUMBER_STG" VARCHAR2(15),
"REMARKS_STG" VARCHAR2(100),
"EXCEPTION_CHK_STG" CHAR(1),
"HOLD_CHK_STG" CHAR(1),
"PAN_STG" CHAR(1),
"BOARD_RESOLUTION_STG" CHAR(1),
"KYC_STG" CHAR(1),
"MOA_STG" CHAR(1),
"CHEQUE_STG" CHAR(1),
"ASL_STG" CHAR(1),
"TRUST_DEED_STG" CHAR(1),
"PARTNERSHIP_DEED_STG" CHAR(1),
"BYE_LAWS_STG" CHAR(1),
"AUTO_DEBIT_STG" CHAR(1),
"ENROLLMENT_FORM_STG" CHAR(1),
"APPROVED" VARCHAR2(10),
"CREATED_BY" NUMBER,
"CREATION_DATE" DATE,
"LAST_UPDATE_DATE" DATE,
"LAST_UPDATED_BY" NUMBER,
"FUND_TYPE" VARCHAR2(30)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY DUMP_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
BADFILE 'emp.bad'
LOGFILE 't.log_xt'
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' LDTRIM
REJECT ROWS WITH ALL NULL FIELDS
"DATA_DISCRAPANCY"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"APPLICATION_NUMBER"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BATCH_NUMBER"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"AMC_ID"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"TA_BRANCH"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SERVER_DATE_1"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SERVER_TIME_1"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ARN_NUMBER"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BROKER_CODE"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SUB_BROKER_CODE_1"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SUB_BROKER_CODE_2"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SUB_BROKER_CODE_3"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BROKER_ADDRESS"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"FOLIO_NO"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"MODE_OF_HOLDING"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"STATUS"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SALUATION"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CONTACT_PERSON_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"INVESTOR_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PAN_NO"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"POA_HOLDER_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"KYC"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_PAN_NO"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"DOB_DOC"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_RELATIONSHIP"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SALUATION_2"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"DATE_OF_BIRTH"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SECOND_HOLDER_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_NAME_2"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PAN_NO_2"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_PAN_NO_2"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"KYC_2"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_RELATIONSHIP_2"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SALUATION_3"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"DATE_OF_BIRTH_3"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"THIRD_HOLDER_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_NAME_3"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PAN_NO_3"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_PAN_NO_3"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"KYC_3"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_RELATIONSHIP_3"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ADDRESS_LINE_1"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PINCODE"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ADDRESS_LINE_2"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CONTACT_NUMBER_O"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CITY"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CONTACT_NUMBER_R"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"STATE"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"MOBILE_NUMBER"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"COUNTRY"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"EMAIL_ID"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ALT_EMAIL_ID"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ADDRESS_LINE_1_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PINCODE_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ADDRESS_LINE_2_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CONTACT_NUMBER_O_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CITY_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CONTACT_NUMBER_R_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"STATE_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"MOBILE_NUMBER_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"COUNTRY_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"EMAIL_ID_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ALT_EMAIL_ID_A"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"DESPATCH_ACCOUNT"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"REDEMPTION_PAYOUT"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"I_PIN_ASSIGN"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"DIVIDEND_PAYOUT"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"T_PIN_ASSIGN"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"TO_FUND_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"TO_FUND_ID"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"OPTION_F_S"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"MCR_NO"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BANK_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ACCOUNT_NUMBER"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BANK_BRANCH"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ACCOUNT_TYPE"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"MCR_NO_P"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CHEQUE_DD_NO"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BANK_NAME_P"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PAYMENT_DATE"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BANK_BRANCH_P"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ADVICE_NO"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PAYMENT_TYPE_P"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"AMOUNT_P"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"MINIMUM_AMOUNT"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"AMOUNT_WORDS"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"NOMINEE_NAME"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"SALUATION_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"NOMINEE_DOB_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"GUARDIAN_NAME_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"RELATIONSHIP_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ADDRESS_LINE_1_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"COUNTRY_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ADDRESS_LINE_2_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PINCODE_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CITY_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CONTACT_NUMBER_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"STATE_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"MOBILE_NUMBER_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"REMARKS_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"EXCEPTION_CHK_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"HOLD_CHK_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PAN_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BOARD_RESOLUTION_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"KYC_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"MOA_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CHEQUE_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ASL_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"TRUST_DEED_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"PARTNERSHIP_DEED_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"BYE_LAWS_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"AUTO_DEBIT_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"ENROLLMENT_FORM_STG"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"APPROVED"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CREATED_BY"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"CREATION_DATE"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"LAST_UPDATE_DATE"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"LAST_UPDATED_BY"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " ' ,
"FUND_TYPE"
TERMINATED BY "," OPTIONALLY ENCLOSED BY ' " '
LOCATION
( 'Key_File_Report.dat'
{code}
While trying to select data from the external table, I am facing the below error.
{code}
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "and, column, exit, (, ltrim, lrtrim, ldrtrim, missing, notrim, rtrim, reject"
KUP-01008: the bad identifier was: LDTRIM
KUP-01007: at line 4 column 77
{code}
Can anyone suggest how to resolve this?
Regards,
Sakthi.
I think I found the problem:
KUP-01008: the bad identifier was: LDTRIM KUP-01007: at line 4 column 77
There is no LDTRIM. It is LDRTRIM or LRTRIM.
This should work:
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LDRTRIM
{code}
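To show the corrected trim spec in context, here is a minimal external-table sketch (made-up columns and file name, not the poster's full table; the DUMP_DIR directory object is assumed to exist):

```sql
-- Minimal sketch showing LDRTRIM (not LDTRIM) and the enclosure quotes
-- written without embedded spaces. Assumes directory DUMP_DIR exists.
CREATE TABLE ext_demo (
  id   NUMBER,
  name VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY DUMP_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    BADFILE 'ext_demo.bad'
    LOGFILE 'ext_demo.log'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
  )
  LOCATION ('demo.dat')
)
REJECT LIMIT UNLIMITED;
```

Note that writing the enclosure as ' " ' (with spaces) makes the delimiter a three-character string including blanks, which is almost never what a CSV file actually contains.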
Edited by: Sven W. on Aug 27, 2010 2:59 PM -- removed the blanks in the optionally enclosed spec -
Hi, we are working on a Business Objects migration project (6.1 to BO XI R2). Can anyone help me with derived table creation for Free-Hand SQL reports? How do we use prompt functions as well?
Thanks for advance....
Thanks&Regards
Ramu
Ramu,
Your question is posted in the General forum, but I think you might want to close this entry and re-post in one of the two following forums:
Web Intelligence (SAP BusinessObjects Web Intelligence)
Universe Designer & Business Views Designer (Semantic Layer)
Thanks,
John -
Solman table names for BP creation and others
Dear All,
I am new to Solution Manager, and I need some relevant information:
I want the table names for the following areas.
If you have them, could you tell me the table names and their descriptions?
1. Business partner creation.
2. Support messages: which table do their entries flow into?
3. Change requests: which table are their entries saved in?
Regards
Anand
Hi,
For lead, activity and opportunity, the table names are:
1. CRMD_ORDERADM_H - Business Transaction
2. CRMD_ORDERADM_I - Business Transaction Item
Reward points if helpful.
Shridhar
Edited by: Shridhar Deshpande on Jan 30, 2008 7:52 AM -
Spatial index creation for table with more than one geometry columns?
I have a table with more than one geometry column.
I've added a record in the USER_SDO_GEOM_METADATA table for every geometry column in the table.
When I try to create spatial indexes over the geometry columns in the table, I get this error message:
ERROR at line 1:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13203: failed to read USER_SDO_GEOM_METADATA table
ORA-13203: failed to read USER_SDO_GEOM_METADATA table
ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 8
ORA-06512: at line 1
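For reference, a hedged sketch of registering two geometry columns on one table and indexing each; the table name, column names, bounds, tolerance and SRID below are illustrative assumptions:

```sql
-- Sketch: one USER_SDO_GEOM_METADATA row per geometry column, then one
-- spatial index per column. All names and values here are assumptions.
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES ('PARCELS', 'GEOM_BOUNDARY',
        SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, 0.005),
                      SDO_DIM_ELEMENT('Y',  -90,  90, 0.005)),
        4326);
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES ('PARCELS', 'GEOM_CENTROID',
        SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, 0.005),
                      SDO_DIM_ELEMENT('Y',  -90,  90, 0.005)),
        4326);
COMMIT;

CREATE INDEX parcels_boundary_sidx ON parcels (geom_boundary)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;
CREATE INDEX parcels_centroid_sidx ON parcels (geom_centroid)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;
```

ORA-13203 during index creation usually means the metadata row for that exact table/column pair is missing or wrong (e.g. a typo in TABLE_NAME or COLUMN_NAME, or the row was inserted by a different user), which matches the poster's eventual finding.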
What is the solution?
I've got errors in my USER_SDO_GEOM_METADATA.
The problem does not exist! -
Best Practices for Table creation
Is it a good practice to have a primary key and/or unique key identifier for every table created for an application, even if for some reason the table is only being used as a temporary or interface table? Thanks.
Hi,
only being used as temporary
For temporary tables, look up "CREATE GLOBAL TEMPORARY TABLE" in TFM.
As for me, a table without a PK suffers from an error in its design. PKs are one of Merise's foundations.
Sure there are exceptions, but if more than x% of your tables have no PK, there's a problem.
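A short sketch of the suggestion above (table and column names are illustrative): a global temporary table can still carry a primary key, so even staging rows get uniqueness enforcement.

```sql
-- Illustrative sketch: session-private rows, persistent definition,
-- and a PK so duplicate interface rows are rejected at load time.
CREATE GLOBAL TEMPORARY TABLE interface_stage (
  stage_id NUMBER NOT NULL,
  payload  VARCHAR2(4000),
  CONSTRAINT interface_stage_pk PRIMARY KEY (stage_id)
) ON COMMIT DELETE ROWS;  -- rows vanish at commit; the table definition stays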
Regards,
Yoann. -
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance has anyone encountered something similar?
The problem may be related to a behavior difference, between 10g and the database that was upgraded from 10g to 11g, in an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA;
When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately, granting select privilege to a role for the table which was just created:
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
-- build the grant statement, using " as a stand-in for the quote character
l_str := 'execute immediate "grant select on ' ||
ora_dict_obj_name ||
' to select_role";';
-- swap each " for '' so the job action is valid PL/SQL, then submit it
dbms_job.submit( l_job, replace(l_str,'"','''') );
end if;
end;
{code}
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending SYS-generated SQL, which is not issued when the same test is run on a 10g environment set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
Edited by: user8598842 on Mar 11, 2010 5:08 PM
So while this certainly isn't the most elegant of solutions, and most assuredly isn't in the realm of supported by Oracle...
I've used the DBMS_IJOB.DROP_USER_JOBS('username') procedure to remove the 194558 orphaned job entries from the JOB$ table. Don't ask, I've no clue how they all got there; but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now-wasted ~67MB of space, I've opted to create a new index on the JOB$ table to sidestep the full table scan:
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
The next option would be to try to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
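For anyone hitting a similar orphaned-job situation, the documented DBA_JOBS view can be used to size the problem before resorting to the undocumented DBMS_IJOB package (a sketch; the view and its columns are standard, but treat DBMS_IJOB itself as unsupported and involve Oracle Support before using it):
{code}
-- Count dbms_job entries per owner using the documented view over JOB$:
SELECT log_user, priv_user, COUNT(*) AS job_count
FROM   dba_jobs
GROUP  BY log_user, priv_user
ORDER  BY job_count DESC;

-- The undocumented cleanup used above, for reference (run as SYS, unsupported):
-- EXEC DBMS_IJOB.DROP_USER_JOBS('USERNAME');
{code}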
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Table creation - order of events
I am trying to get some help on the order I should be carrying out table creation tasks.
Say I create a simple table:
create table title (
title_id number(2) not null,
title varchar2(10) not null,
effective_from date not null,
effective_to date not null,
constraint pk_title primary key (title_id)
);
I believe I should populate the data, then create my index:
create unique index title_title_id_idx on title (title_id asc)
But I have read that Oracle will automatically create an index for my primary key if I do not do so myself.
At what point does Oracle create the index on my behalf and how do I stop it?
Should I only apply the primary key constraint after the data has been loaded as well?
Even then, if I add the primary key constraint will Oracle not immediately create an index for me when I am about to create a specific one matching my naming conventions?
Yeah, but just handle it the way you would handle any other constraint violation - with the EXCEPTIONS INTO clause...
SQL> select index_name, uniqueness from user_indexes
2 where table_name = 'APC'
3 /
no rows selected
SQL> insert into apc values (1)
2 /
1 row created.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
Table altered.
SQL> insert into apc values (2)
2 /
insert into apc values (2)
ERROR at line 1:
ORA-00001: unique constraint (APC.APC_PK) violated
SQL> alter table apc drop constraint apc_pk
2 /
Table altered.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> @%ORACLE_HOME%/rdbms/admin/utlexcpt.sql
Table created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 exceptions into EXCEPTIONS
4 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> select * from apc where rowid in ( select row_id from exceptions)
2 /
COL1
2
2
SQL>
All this is in the documentation. Find out more.
Cheers, APC -
How to create monthly table creation?
Hi Mates,
The analytics database is no longer creating a new table each month; instead, data keeps loading into the previous month's table, as shown in the attached screenshot. The schema user has the table creation privilege. We are using WebCenter Interaction 10gR4.
How can we get the monthly table creation working again, please?
Thanks,
Katherine
Hi Trevor,
Thanks for your help. We were able to create the tables and load data through April, as attached.
However, the analytics user's privileges were modified in April due to a server operation.
Since then, the analytics log has shown a message saying there is no permission to create tables.
The privileges were re-granted after we found this message. As I suspected, the issue started after the privilege change.
Currently, the analytics users are granted all privileges.
Any idea please?
Thanks,
Kathy -
Trigger internal serial nos creation for inbound delivery using DELVRY03
Hi all,
I am working in ECC 6.0 I need to update serial nos in the inbound delivery. The internal serial nos. that will be generated by the system, I have to trap them for a given delivery and map it to a set of external serial nos.(length is 40 chars) and update in a Z table.
For changing the delivery, I am using idoc DELVRY03 (message SHPCON). Within the idoc I am not able to find any such field which will trigger the automatic creation of internal serial nos. In case of idoc MBGMCR03 (goods movement), there is a field E1BP2017_GM_ITEM_CREATE-SERIALNO_AUTO_NUMBERASSIGNMENT which when set to 'X', the internal serial nos are automatically generated and can be found in table SER03.
I want a similar thing for the DELVRY03 idoc. Can anyone help me on this?
If not possible through Idoc, then whats the other option?
My aim is to just get the internal serial nos created through delivery change activity.
Note - The serial no. profile cannot have the option of serial no. usage set to '04 - automatic' in SPRO as per the client's business process.
Thanks,
Shoma
Hi Shoma,
Copy the function module that is assigned to your process code to a Z function module and write your modifications in that ZFM.
That should resolve your problem.
Regards,
Venkat. -
Dynamic Internal Table creation and population
Hi gurus !
my issue refers to the slide 10 provided in this slideshow : https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b332e090-0201-0010-bdbd-b735e96fe0ae
My example is gonna sound dumb, but anyway: I want to dynamically select from a table into a dynamically created itab.
Let's use only EKPO, and only field MENGE.
For this, I use Classes cl_abap_elemdescr, cl_sql_result_set and the Data Ref for table creation. But while fetching the resultset, program dumps when fields like MENGE, WRBTR are accessed. Obviously their type are not correctly taken into account by my program.
Here it comes:
DATA: element_ref TYPE REF TO cl_abap_elemdescr,
vl_fieldname TYPE string,
tl_components TYPE abap_component_tab,
sl_components LIKE LINE OF tl_components,
linetype_lcl TYPE REF TO cl_abap_structdescr,
ty_table_type TYPE REF TO cl_abap_tabledescr,
g_resultset TYPE REF TO cl_sql_result_set
...
CONCATENATE sg_columns-table_name '-' sg_columns-column_name INTO vl_fieldname.
* sg_columns-table_name contains 'EKPO'
* sg_columns-column_name contains 'MENGE'
* getting the element as a component
element_ref ?= cl_abap_elemdescr=>describe_by_name( vl_fieldname ).
sl_components-name = sg_columns-column_name.
sl_components-type ?= element_ref.
APPEND sl_components TO tl_components.
* dynamic creation of internal table
linetype_lcl = cl_abap_structdescr=>create( tl_components ).
ty_table_type = cl_abap_tabledescr=>create(
p_line_type = linetype_lcl ).
...
* Then I will create my field symbol table and line. Code has been cut here.
CREATE DATA dy_line LIKE LINE OF <dyn_table>.
...
* Then I will execute my query. Here it's: Select MENGE From EKPO Where Rownum = 1.
g_resultset = g_stmt_ref->execute_query( stmt_str ).
* Then structure for the Resultset is set
CALL METHOD g_resultset->set_param_struct
EXPORTING
struct_ref = dy_line.
* Fetching the lines of the resultset => Dump...
WHILE g_resultset->next( ) > 0.
ASSIGN dy_line->* TO <dyn_wa>.
APPEND <dyn_wa> TO <dyn_table>.
ENDWHILE.
Anyone has any clue to how prevent my Dump ??
The component for MENGE seems to be described as a P7 with 2 decimals, and the resultset wants to use a QUAN type... or something like that!
Hello,
I have expanded your sample coding for selecting three fields out of EKPO:
*& Report ZUS_SDN_SQL_RESULT_SET
*& Thread: Dynamic Internal Table creation and population
*& Thread ID: 1375510
*& NOTE: Coding for dynamic structure / itab creation taken from:
*& Creating Flat and Complex Internal Tables Dynamically using RTTI
*& https://wiki.sdn.sap.com/wiki/display/Snippets/Creating+Flat+and+
*& Complex+Internal+Tables+Dynamically+using+RTTI
REPORT zus_sdn_sql_result_set.
TYPE-POOLS: abap.
DATA:
go_sql_stmt TYPE REF TO cl_sql_statement,
go_resultset TYPE REF TO cl_sql_result_set,
gd_sql_clause TYPE string.
DATA:
gd_tabfield TYPE string,
go_table TYPE REF TO cl_salv_table,
go_sdescr_new TYPE REF TO cl_abap_structdescr,
go_tdescr TYPE REF TO cl_abap_tabledescr,
gdo_handle TYPE REF TO data,
gdo_record TYPE REF TO data,
gs_comp TYPE abap_componentdescr,
gt_components TYPE abap_component_tab.
FIELD-SYMBOLS:
<gs_record> TYPE ANY,
<gt_itab> TYPE STANDARD TABLE.
START-OF-SELECTION.
continued. -
Bad file is not created during the external table creation.
Hello Experts,
I have created a script for an external table in an Oracle 10g DB. Everything works fine except that it does not create the bad file, although it does create the log file. I can't figure out what the issue is; because of it my shell script fails and the entire program fails. I am attaching the table creation script, the shell script where it is referenced, and the error. Kindly let me know if something is missing. Thanks in advance.
Table Creation Script:
create table RGIS_TCA_DATA_EXT
(
guid VARCHAR2(250),
badge VARCHAR2(250),
scheduled_store_id VARCHAR2(250),
parent_event_id VARCHAR2(250),
event_id VARCHAR2(250),
organization_number VARCHAR2(250),
customer_number VARCHAR2(250),
store_number VARCHAR2(250),
inventory_date VARCHAR2(250),
full_name VARCHAR2(250),
punch_type VARCHAR2(250),
punch_start_date_time VARCHAR2(250),
punch_end_date_time VARCHAR2(250),
event_meet_site_id VARCHAR2(250),
vehicle_number VARCHAR2(250),
vehicle_description VARCHAR2(250),
vehicle_type VARCHAR2(250),
is_owner VARCHAR2(250),
driver_passenger VARCHAR2(250),
mileage VARCHAR2(250),
adder_code VARCHAR2(250),
bonus_qualifier_code VARCHAR2(250),
store_accuracy VARCHAR2(250),
store_length VARCHAR2(250),
badge_input_type VARCHAR2(250),
source VARCHAR2(250),
created_by VARCHAR2(250),
created_date_time VARCHAR2(250),
updated_by VARCHAR2(250),
updated_date_time VARCHAR2(250),
approver_badge_id VARCHAR2(250),
approver_name VARCHAR2(250),
orig_guid VARCHAR2(250),
edit_type VARCHAR2(250)
)
organization external
(
type ORACLE_LOADER
default directory ETIME_LOAD_DIR
access parameters
(
RECORDS DELIMITED BY NEWLINE
BADFILE ETIME_LOAD_DIR:'tstlms.bad'
LOGFILE ETIME_LOAD_DIR:'tstlms.log'
READSIZE 1048576
FIELDS TERMINATED BY '|'
MISSING FIELD VALUES ARE NULL(
GUID
,BADGE
,SCHEDULED_STORE_ID
,PARENT_EVENT_ID
,EVENT_ID
,ORGANIZATION_NUMBER
,CUSTOMER_NUMBER
,STORE_NUMBER
,INVENTORY_DATE char date_format date mask "YYYYMMDD HH24:MI:SS"
,FULL_NAME
,PUNCH_TYPE
,PUNCH_START_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,PUNCH_END_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,EVENT_MEET_SITE_ID
,VEHICLE_NUMBER
,VEHICLE_DESCRIPTION
,VEHICLE_TYPE
,IS_OWNER
,DRIVER_PASSENGER
,MILEAGE
,ADDER_CODE
,BONUS_QUALIFIER_CODE
,STORE_ACCURACY
,STORE_LENGTH
,BADGE_INPUT_TYPE
,SOURCE
,CREATED_BY
,CREATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,UPDATED_BY
,UPDATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,APPROVER_BADGE_ID
,APPROVER_NAME
,ORIG_GUID
,EDIT_TYPE
)
)
location (ETIME_LOAD_DIR:'tstlms.dat')
)
reject limit UNLIMITED;
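One general point about ORACLE_LOADER external tables that may explain the symptom: the BADFILE is only written when at least one record is actually rejected. A fully clean load leaves no bad file at all, so its absence is not necessarily an error. A quick hedged check from SQL (table and file names as defined above):
{code}
-- If this count matches the line count of tstlms.dat, no rows were
-- rejected and no .bad file will have been written at all.
SELECT COUNT(*) FROM rgis_tca_data_ext;
-- When rows are rejected, the rejects are also noted in the LOGFILE
-- (ETIME_LOAD_DIR:'tstlms.log').
{code}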
Shell Script:
version=1.0
umask 000
DATE=`date +%Y%m%d%H%M%S`
TIME=`date +"%H%M%S"`
SOURCE=`hostname`
fcp_login=`echo $1|awk '{print $3}'|sed 's/"//g'|awk -F= '{print $2}'`
fcp_reqid=`echo $1|awk '{print $2}'|sed 's/"//g'|awk -F= '{print $2}'`
TXT1_PATH=/home/ac1/oracle/in/tsdata
TXT2_PATH=/home/ac2/oracle/in/tsdata
ARCH1_PATH=/home/ac1/oracle/in/tsdata
ARCH2_PATH=/home/ac2/oracle/in/tsdata
DEST_PATH=/home/custom/sched/in
PROGLOG=/home/custom/sched/logs/rgis_tca_to_tlms_create.sh.log
PROGNAME=`basename $0`
PROGPATH=/home/custom/sched/scripts
cd $TXT2_PATH
FILELIST2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
> $DEST_PATH/tstlmsedits.dat
for i in $FILELIST2
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES2 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES1="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST1="`ls -lrt tstlms*.dat |awk '{print $9}'`"
> $DEST_PATH/tstlms.dat
for i in $FILELIST1
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES1 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
cd $TXT1_PATH
FILELIST3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
> $DEST_PATH/tstlmsedits.dat
for i in $FILELIST3
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES3 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES4="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST4="`ls -lrt tstlms*.dat |awk '{print $9}'`"
> $DEST_PATH/tstlms.dat
for i in $FILELIST4
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES4 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
#connecting to oracle to generate bad files
sqlplus -s $fcp_login<<EOF
select count(*) from rgis_tca_data_ext;
select count(*) from rgis_tca_data_history_ext;
exit;
EOF
#counting the records in files
tot_rec_in_tstlms=`wc -l $DEST_PATH/tstlms.dat | awk ' { print $1 } '`
tot_rec_in_tstlmsedits=`wc -l $DEST_PATH/tstlmsedits.dat | awk ' { print $1 } '`
tot_rec_in_tstlms_bad=`wc -l $DEST_PATH/tstlms.bad | awk ' { print $1 } '`
tot_rec_in_tstlmsedits_bad=`wc -l $DEST_PATH/tstlmsedits.bad | awk ' { print $1 } '`
#updating log table
echo "pl/sql block started"
sqlplus -s $fcp_login<<EOF
define tot_rec_in_tstlms = '$tot_rec_in_tstlms';
define tot_rec_in_tstlmsedits = '$tot_rec_in_tstlmsedits';
define tot_rec_in_tstlms_bad = '$tot_rec_in_tstlms_bad';
define tot_rec_in_tstlmsedits_bad='$tot_rec_in_tstlmsedits_bad';
define fcp_reqid ='$fcp_reqid';
declare
l_tstlms_file_id number := null;
l_tstlmsedits_file_id number := null;
l_tot_rec_in_tstlms number := 0;
l_tot_rec_in_tstlmsedits number := 0;
l_tot_rec_in_tstlms_bad number := 0;
l_tot_rec_in_tstlmsedits_bad number := 0;
l_request_id fnd_concurrent_requests.request_id%type;
l_start_date fnd_concurrent_requests.actual_start_date%type;
l_end_date fnd_concurrent_requests.actual_completion_date%type;
l_conc_prog_name fnd_concurrent_programs.concurrent_program_name%type;
l_requested_by fnd_concurrent_requests.requested_by%type;
l_requested_date fnd_concurrent_requests.request_date%type;
begin
--getting concurrent request details
begin
SELECT fcp.concurrent_program_name,
fcr.request_id,
fcr.actual_start_date,
fcr.actual_completion_date,
fcr.requested_by,
fcr.request_date
INTO l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
l_requested_by,
l_requested_date
FROM fnd_concurrent_requests fcr, fnd_concurrent_programs fcp
WHERE fcp.concurrent_program_id = fcr.concurrent_program_id
AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
exception
when no_data_found then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log, 'No data found for request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when retrieving request_id request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlms.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlms_file_id,
'tstlms.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlms,
&tot_rec_in_tstlms_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlms file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlmsedits.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlmsedits_file_id,
'tstlmsedits.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlmsedits,
&tot_rec_in_tstlmsedits_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlmsedits file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
end;
exit;
EOF
echo "rgis_tca_to_tlms_process.sql started"
sqlplus -s $fcp_login @$SCHED_TOP/sql/rgis_tca_to_tlms_process.sql $fcp_reqid
exit;
echo "rgis_tca_to_tlms_process.sql ended"
Error:
RGIS Scheduling: Version : UNKNOWN
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
TCATLMS module: TCA To TLMS Import Process
Current system time is 18-AUG-2011 06:13:27
COUNT(*)
16
COUNT(*)
25
wc: cannot open /home/custom/sched/in/tstlms.bad
wc: cannot open /home/custom/sched/in/tstlmsedits.bad
pl/sql block started
old 33: AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
new 33: AND fcr.request_id = 18661823; --fnd_global.conc_request_id();
old 63: &tot_rec_in_tstlms,
new 63: 16,
old 64: &tot_rec_in_tstlms_bad,
new 64: ,
old 97: &tot_rec_in_tstlmsedits,
new 97: 25,
old 98: &tot_rec_in_tstlmsedits_bad,
new 98: ,
ERROR at line 64:
ORA-06550: line 64, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall merge time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternatively-q
ORA-06550: line 98, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql st
rgis_tca_to_tlms_process.sql started
old 12: and concurrent_request_id = '&1';
new 12: and concurrent_request_id = '18661823';
old 18: and concurrent_request_id = '&1';
new 18: and concurrent_request_id = '18661823';
old 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,&1);
new 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,18661823);
old 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,&1);
new 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,18661823);
old 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',&1);
new 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',18661823);
declare
ERROR at line 1:
ORA-20001: Error occured when executing RGIS_TCA_TO_TLMS_PROCESS.sql ORA-01403:
no data found
ORA-06512: at line 59
Executing request completion options...
------------- 1) PRINT -------------
Printing output file.
Request ID : 18661823
Number of copies : 0
Printer : noprint
Finished executing request completion options.
Concurrent request completed successfully
Current system time is 18-AUG-2011 06:13:29
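Reading the log above, the PLS-00103 errors at lines 64 and 98 follow directly from the missing bad files: wc fails, the shell variables stay empty, and the DEFINEs substitute nothing, leaving bare commas in the anonymous block. A defensive sketch for the substitution (same variable names as the script; purely illustrative):
{code}
-- Quote the substitution variable and default an empty value to 0,
-- so an empty DEFINE no longer yields invalid PL/SQL:
define tot_rec_in_tstlms_bad = ''
select to_number(nvl(trim('&tot_rec_in_tstlms_bad'), '0')) as bad_count
from dual;
-- yields 0 instead of a syntax error when the define is empty;
-- the same pattern could wrap the arguments passed to
-- ins_or_upd_tca_process_log.
{code}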
---------------------------------------------------------------------------
Hi,
Check the status of the batch in transaction SM35.
If the batch is locked by mistake or stopped by any other error, you can release it and process it again.
To release it, press Shift+F4. You can also analyse the job status with the F2 button.
Bye