Table creation in Oracle back end
Hi Gurus,
Can we create an Oracle table in the SAP schema directly from the back end and update it by taking a snapshot from another Oracle system?
I know it's not a recommended solution and a DB link / DBCON would be a better solution, but the problem is that the other Oracle system is not ready to give us access / a user ID to connect to it using a DB link.
Can we access the table created above using native SQL statements, or can we create a synonym to a Z-table created using SE11?
Thanks in advance.
Sumit
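For what it's worth, if the DBAs do create such a table directly in the database, the usual pattern looks roughly like the sketch below. All object names here are invented (SAPSR3 stands in for the actual SAP schema), and creating objects in the SAP schema outside the ABAP dictionary is unsupported:

```sql
-- Hypothetical snapshot table created directly in the back end:
CREATE TABLE sapsr3.zsnapshot_ext (
  matnr VARCHAR2(18),
  qty   NUMBER
);

-- A synonym pointing at a Z-table that was created via SE11:
CREATE SYNONYM zsnap_syn FOR sapsr3.zmytable;
```

ABAP Native SQL (EXEC SQL) can then read either object, since it bypasses the dictionary.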
Hi Bill,
Thanks for the reply. Please check below.
My DTData.txt:
My Agentry.ini file:
My File back end log file:
--In the editor I added a new system connection with "File Back end", created a new data table with this system connection, and entered DTData.txt as the file name.
--Created a "Tables" folder in the ServerDev root directory, placed the DTData.txt file in this folder, and entered the text from the first screenshot above.
--In my ATE I couldn't see this file data table.
Thanks,
Swaroopa.
Similar Messages
-
Static Tables Creation In oracle & Diff Between Static table ,Dynamic table
972471 wrote:
Static Tables Creation In oracle & Diff Between Static table ,Dynamic table
All tables in a well designed application should be static tables.
Whilst it's possible to execute dynamic code and therefore create tables dynamically at run time, this is considered poor design and should be avoided in 99.99% of cases... though for some reason some people still think they need to do this, and it's never really justified.
So what issue are you facing that you need to even bother considering dynamic tables? -
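For context, the "dynamic table" pattern the reply is discouraging is runtime DDL via dynamic SQL, roughly like this sketch (table name invented):

```sql
-- Discouraged in almost all designs: creating tables at run time.
BEGIN
  EXECUTE IMMEDIATE
    'CREATE TABLE stage_' || TO_CHAR(SYSDATE, 'YYYYMMDD') ||
    ' (id NUMBER, payload VARCHAR2(4000))';
END;
/
```

A static table with a date column serves the same need without issuing DDL at runtime.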
Oracle Back end job procedure and forms 10g
Hi friends,
I have to insert the employee joining date into the "B_join" table from the "Emp" table with a flag of "IN" (inserted); after 10 minutes the flag has to be changed to "VA" (validated). For this I have to create an Oracle job in the back end. Before that:
1) Should I create a procedure which inserts data into the "B_join" table?
2) Do I use the same procedure in the job?
3) I have to show a message to the manager in Forms such as "Hello Sir, tomorrow is the joining of emp1, emp2, emp3, emp4..." (the list extends per employee).
Please advise.
Thanks and Regards
VeeKay
Hi friends, I have created this procedure:
CREATE OR REPLACE PROCEDURE sms_insert1 is
CURSOR c_update
IS
SELECT *
FROM emp
WHERE TO_CHAR (dob, 'ddmm') = TO_CHAR (TRUNC (SYSDATE + 1), 'ddmm');
r_update c_update%ROWTYPE;
m_emp_cpr NUMBER (9);
BEGIN
FOR r_update IN c_update
LOOP
INSERT INTO b_join VALUES ('10000001',
r_update.emp_name,
'EN',
r_update.emp_cpr_id);
END LOOP; -- EXIT WHEN is redundant inside a cursor FOR loop
commit;
EXCEPTION
-- Note: a cursor FOR loop does not raise NO_DATA_FOUND when no rows match
WHEN NO_DATA_FOUND
THEN
dbms_output.put_line ('No employees are joining tomorrow');
WHEN OTHERS
THEN
dbms_output.put_line ('recheck the employee details');
END;
I have to insert the employee joining date into the "B_join" table from the "Emp" table with a flag of "IN" (inserted); after 10 minutes the flag has to be changed to "VA" (validated). For this I have to create an Oracle job in the back end.
I have run Scheduled Job
BEGIN
Dbms_Scheduler.create_job(
job_name => 'DEMO_INSERT_NEW'
,job_type => 'STORED_PROCEDURE'
,job_action => 'SMS_INSERT1' -- must match the name of the procedure created above
,start_date => SYSDATE
,repeat_interval => 'FREQ=MINUTELY'
,enabled => TRUE
,comments => 'JOINING DETAILS INSERT.');
END;
Later I have run this as well:
SQL> EXEC DBMS_LOCK.SLEEP (10)
Data is not inserted into the table.
And how do I change the flag from "IN" to "VA" in the table?
Shall I put the procedure in a DB trigger? -
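One hedged way to handle the IN-to-VA flip without a trigger is a second scheduler job running a PL/SQL block. This sketch assumes B_JOIN has a timestamp column recording when the row was inserted (called INSERTED_AT here, which is invented):

```sql
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'DEMO_VALIDATE_FLAG',
    job_type        => 'PLSQL_BLOCK',
    -- flip rows that have been sitting with flag IN for at least 10 minutes
    job_action      => 'BEGIN UPDATE b_join SET flag = ''VA''
                        WHERE flag = ''IN''
                        AND inserted_at <= SYSDATE - 10/1440; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE,
    comments        => 'Validate B_JOIN rows 10 minutes after insert.');
END;
/
```

Note also that the posted job names SMS_INSERT while the procedure created was SMS_INSERT1; that mismatch alone would stop rows from appearing.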
Refresh table and display the records after insertion of data from back end
Hi Experts,
JDEV 11.1.2.1
I have a use case which needs data to be inserted from an Oracle back-end procedure and displayed in an ADF Table component when a button is pressed.
Is this possible? If yes, how?
Will the view object automatically refresh and fetch the newly created (back-end) rows?
Thanks in advance
PMS
Hi user707,
Thanks for your reply.
You suggested: "i think after executing a procedure you want to call commit operation. better you can perform this.getDbtransaction().commit In Application Module". Yes, I want to commit the transaction after executing the back-end procedure using a PreparedStatement. The procedure inserts data into the same table that was used for creating the VO and the read-only ADF Table. The procedure executes fine, but the newly created records are not getting into the ADF table. Once I did the commit operation inside the back-end procedure, all the records appeared in the ADF Table.
Is there any way to get all the records without doing the commit operation inside the back-end procedure?
PMS -
Can Oracle 9i enable schema/table creation to be transacted?
If anyone can help with this, that would be much appreciated.
So - the server has disabled autocommit and commits/rollbacks are handled by the application. Even though this is the case, Oracle 9i is not rolling back changes that have (i) created schemas/users and/or (ii) tables.
Worse still, it seems to be performing a partial rollback - some tables in a schema are left with data and others are not.
Now, this may be caused by our server creating tables for indexing while adding data to some existing tables; that is, the table DDL has auto-committed the transaction to date, also committing the table insertions/updates.
After some delving, the JDBC driver has the following method: dataDefinitionCausesTransactionCommit - for Pointbase and other databases, this returns false - for Oracle it returns true.
The questions are therefore:
1) Is there a solution with Oracle 9i that enables schema and table creation to be transacted?
2) Does Oracle 10g allow definition clauses to be transacted?
Actually I believe there is a limited way to make DDL statements transaction-based via the CREATE SCHEMA command.
From the 9.2 SQL manual >>
Use the CREATE SCHEMA to create multiple tables and views and perform multiple grants in a single transaction.
To execute a CREATE SCHEMA statement, Oracle executes each included statement. If all statements execute successfully, Oracle commits the transaction. If any statement results in an error, Oracle rolls back all the statements.
<<
This may be of some limited use to you, but your process should probably be changed to keep track of the DDL and to undo (drop) any created objects if a rollback is issued.
HTH -- Mark D Powel -- -
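As a concrete illustration of the quoted manual text (object names invented), a CREATE SCHEMA statement groups the DDL so that it succeeds or fails as a unit:

```sql
-- All three statements commit together or roll back together.
CREATE SCHEMA AUTHORIZATION scott
  CREATE TABLE dept (deptno NUMBER(2), dname VARCHAR2(14))
  CREATE VIEW dept_view AS SELECT dname FROM dept
  GRANT SELECT ON dept_view TO public;
```

If any embedded statement fails, Oracle rolls back all of them, which is the limited transactional DDL the reply describes.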
Front-end: APEX, Back-end: Oracle 10g
I'm new to Apex but I'd like to have a two tier setup with a front-end server running Apex 3 that is accessible to the Internet and a back-end server running Oracle 10g database that is only accessible from the front-end server. I know how to restrict the access using my firewall, but how do I "point" the apex-server at the db-server?
Hi Troy,
Yes that's correct, the 'pointing' between the OHS and the DB is done via the Database Access Descriptor (DAD), if you read through the APEX installation docs here -
http://download-uk.oracle.com/docs/cd/B32472_01/doc/install.300/b32468/toc.htm
specifically the section on editing the DAD -
http://download-uk.oracle.com/docs/cd/B32472_01/doc/install.300/b32468/post_inst.htm#CHDHCBGI
all should become clear.
Note that the above section on editing the DAD might not be applicable to your exact setup, so please take the time to read through the entire installation doc rather than just skipping to the section I've pointed to. -
How to know the status of concurrent program from back-end in oracle apps
Hi,
Can you please explain step by step how to check the status of a concurrent program from the back end in Oracle Apps.
Thanks,
Raj
When a record is being updated by a form, if you create a Pre-Update trigger on the block, the trigger will run for each record being updated.
Same thing happens with a Pre-Insert and Pre-Delete trigger. -
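For the original question, the status is usually read from the standard FND tables; a minimal sketch (the bind variable name is invented):

```sql
-- PHASE_CODE:  P = Pending, R = Running, C = Completed, I = Inactive
-- STATUS_CODE: e.g. C = Normal, E = Error, G = Warning
SELECT request_id, phase_code, status_code
FROM   fnd_concurrent_requests
WHERE  request_id = :p_request_id;
```

There is also the FND_CONCURRENT.GET_REQUEST_STATUS API if you need the translated meanings rather than the raw codes.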
How to get PO CHANGE HISTORY from back end tables?
Hi All,
I need to get the PO change history from the back end; the reason being that users want to see not only the change made but who made the changes on those POs. The PO Change History HTML page does not give the user who made changes to the PO. Can anyone provide me the query involved in this HTML page? I can add the updated_by column to it. I built a query already, but it is not giving me the field altered on the PO line.
I learnt that we can get the information from the archive tables, but my question is how to know which field has been changed from the original PO to the revised PO. I am unable to figure out if a revision is for a unit price change, amount change, bill-to change, etc. That is, as it shows in the PO Change History HTML page, I need to see which field was altered on a particular PO. Is that possible?
Thanks for your help!
Prathima
Message was edited by:
Prathima
Hi Anil,
Turning audit on for PO_LINES_ALL helped to track the changes. I have a problem though.
Below is the query I created to track changes to the unit price of a PO line.
SELECT
poh.segment1 po_num
,pol.line_num po_line_num
,pola.unit_price price_changed_from
,pol.unit_price price_changed_to
,fnd1.description created_by
,fnd.description last_updated_by
,fnd2.description previously_updated_by
--,decode(pola.audit_transaction_type,'U','UPDATE','I','INSERT') audit_transaction_type
,pol.last_update_date
FROM po.po_lines_all_a pola,po.po_lines_all pol,apps.fnd_user fnd,apps.fnd_user fnd1,po.po_headers_all poh,apps.fnd_user fnd2
WHERE pola.po_line_id = pol.po_line_id
AND pola.audit_transaction_type = 'U'
and poh.po_header_id = pol.po_header_id
AND fnd.user_name = pola.audit_user_name
and pola.unit_price is not null
and fnd1.user_id = pol.created_by
and fnd2.user_id(+) = pola.LAST_UPDATED_BY
Now the problem is:
When I update a line twice: initially I changed the unit price of line 4 from 0.72 to 0.8, and later I changed it back from 0.8 to 0.72. The initial "changed to" should be 0.8, but it shows 0.72 because it is pulling from the po_lines_all table. How can you capture that "changed to" value?
po_num Line Changed from Changed to
26603 4 0.72000 0.72000
26603 4 0.80000 0.72000
Please help.
Thanks,
Prathima -
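A hedged sketch of one fix for the "changed to" problem above: take the next audit row's value with an analytic function instead of joining back to PO_LINES_ALL. The audit timestamp column name below is an assumption (shadow tables typically carry AUDIT_TIMESTAMP):

```sql
-- Each audit row's "changed to" is the next audit row's unit price
-- for the same PO line, in audit order.
SELECT pola.po_line_id,
       pola.unit_price AS changed_from,
       LEAD(pola.unit_price)
         OVER (PARTITION BY pola.po_line_id
               ORDER BY pola.audit_timestamp) AS changed_to
FROM   po.po_lines_all_a pola
WHERE  pola.unit_price IS NOT NULL;
```

The last audit row's changed_to comes out NULL; that one row can still be filled from the current value in po_lines_all.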
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance has anyone encountered something similar?
The problem may be related to the behavior of an "after CREATE on schema" trigger, which grants select privileges to a role through a dbms_job call, differing between 10g and the database that was upgraded from 10g to 11g. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX)
tablespace LIVE_DATA;
When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately, granting select privilege to a role on the table that was just created:
{code}
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
l_str := 'execute immediate "grant select on ' ||
ora_dict_obj_name ||
' to select_role";';
dbms_job.submit( l_job, replace(l_str,'"','''') );
end if;
end;
{code}
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending SYS-generated SQL, which is not issued when the same test is run on a 10g environment that has been set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
Edited by: user8598842 on Mar 11, 2010 5:08 PM
So while this certainly isn't the most elegant of solutions, and most assuredly isn't in the realm of supported by Oracle...
I've used DBMS_IJOB.DROP_USER_JOBS('username') to remove the 194558 orphaned job entries from the JOB$ table. Don't ask, I've no clue how they all got there; but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now wasted ~67MB of space, I've opted to create a new index on the JOB$ table to sidestep the full table scan.
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
The next option would be to try to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Back end activities for Activation & Deactivation of Aggregates
Hi ,
Could anybody help me understand the back-end activities performed at the time of activation and deactivation of aggregates?
Is filling an aggregate the same as roll-up?
What is the difference between deactivation and deletion of an aggregate?
Thanks.
Santanu
Hi Bose,
Activation:
In order to use an aggregate in the first place, it must be defined, activated and filled. When you activate it, the required tables are created in the database from the aggregate definition. Technically speaking, an aggregate is actually a separate BasicCube with its own fact table and dimension tables. Dimension tables that agree with the InfoCube are used together. Upon creation, every aggregate is given a six-digit number that starts with the figure 1. The table names that make up the logical object that is the aggregate are then derived in a similar manner as the table names of an InfoCube. For example, if the aggregate has the technical name 100001, the fact tables are called /BIC/E100001 and /BIC/F100001. Its dimensions, which are not the same as those in the InfoCube, have the table names /BIC/D100001P, /BIC/D100001T and so on.
Rollup:
New data packets / requests that are loaded into the InfoCube cannot be used at first for reporting if there are aggregates that are already filled. The new packets must first be written to the aggregates by a so-called “roll-up”. In other words, data that has been recently loaded into an InfoCube is not visible for reporting, from the InfoCube or aggregates, until an aggregate roll-up takes place. During this process you can continue to report using the data that existed prior to the recent data load. The new data is only displayed by queries that are executed after a successful roll-up.
Go for the below link for more information.
http://sapbibw2010.blogspot.in/2010/10/aggregates.html
Naresh -
How to find out the details of a PO item before replicating to the back end.
Hi SRM Guru
We are working on SRM 4.0 and the extended classic scenario.
I want to see all the details which are sent to the back end for creation of the purchase order.
I will specify the steps I follow.
Create the SC (waiting for approval); the approver approves the SC, then the SC is created in SRM and replicated in the back end.
I have tried to find out this through the external debugging
I have set breakpoints at BBP_DOC_SAVE_BADI and BBP_DOC_CHANGE_BADI, CL_HANDLER (methods), but I am not able to reach the point of the PO creation in SRM and its replication in the back end.
1. Can anyone tell me the sequence in which the BADIs are called?
2. How does the PO get replicated in the back end?
Which BAPI (BAPI CREATEPO) is called?
After which event is this BAPI called?
Which data is passed for the creation of the PO?
3. Where will I get information on the structures/internal tables which get populated during debugging?
With Regards
DSS
You mean you want to know at runtime?
Get the output from your proxy as a string, count the number of "<" characters and divide it by 2. :-P
Of course, you'll have to treat the header fields separately.
Regards,
Henrique. -
Bad file is not created during the external table creation.
Hello Experts,
I have created a script for an external table in an Oracle 10g DB. Everything is working fine except it does not create the bad file, but it does create the log file. I can't figure out what the issue is. Because of this my shell script is failing and the entire program is failing. I am attaching the table creation script, the shell script where it is referenced, and the error. Kindly let me know if something is missing. Thanks in advance.
Table Creation Script:
create table RGIS_TCA_DATA_EXT
(
guid VARCHAR2(250),
badge VARCHAR2(250),
scheduled_store_id VARCHAR2(250),
parent_event_id VARCHAR2(250),
event_id VARCHAR2(250),
organization_number VARCHAR2(250),
customer_number VARCHAR2(250),
store_number VARCHAR2(250),
inventory_date VARCHAR2(250),
full_name VARCHAR2(250),
punch_type VARCHAR2(250),
punch_start_date_time VARCHAR2(250),
punch_end_date_time VARCHAR2(250),
event_meet_site_id VARCHAR2(250),
vehicle_number VARCHAR2(250),
vehicle_description VARCHAR2(250),
vehicle_type VARCHAR2(250),
is_owner VARCHAR2(250),
driver_passenger VARCHAR2(250),
mileage VARCHAR2(250),
adder_code VARCHAR2(250),
bonus_qualifier_code VARCHAR2(250),
store_accuracy VARCHAR2(250),
store_length VARCHAR2(250),
badge_input_type VARCHAR2(250),
source VARCHAR2(250),
created_by VARCHAR2(250),
created_date_time VARCHAR2(250),
updated_by VARCHAR2(250),
updated_date_time VARCHAR2(250),
approver_badge_id VARCHAR2(250),
approver_name VARCHAR2(250),
orig_guid VARCHAR2(250),
edit_type VARCHAR2(250)
)
organization external
(
type ORACLE_LOADER
default directory ETIME_LOAD_DIR
access parameters
(
RECORDS DELIMITED BY NEWLINE
BADFILE ETIME_LOAD_DIR:'tstlms.bad'
LOGFILE ETIME_LOAD_DIR:'tstlms.log'
READSIZE 1048576
FIELDS TERMINATED BY '|'
MISSING FIELD VALUES ARE NULL(
GUID
,BADGE
,SCHEDULED_STORE_ID
,PARENT_EVENT_ID
,EVENT_ID
,ORGANIZATION_NUMBER
,CUSTOMER_NUMBER
,STORE_NUMBER
,INVENTORY_DATE char date_format date mask "YYYYMMDD HH24:MI:SS"
,FULL_NAME
,PUNCH_TYPE
,PUNCH_START_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,PUNCH_END_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,EVENT_MEET_SITE_ID
,VEHICLE_NUMBER
,VEHICLE_DESCRIPTION
,VEHICLE_TYPE
,IS_OWNER
,DRIVER_PASSENGER
,MILEAGE
,ADDER_CODE
,BONUS_QUALIFIER_CODE
,STORE_ACCURACY
,STORE_LENGTH
,BADGE_INPUT_TYPE
,SOURCE
,CREATED_BY
,CREATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,UPDATED_BY
,UPDATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,APPROVER_BADGE_ID
,APPROVER_NAME
,ORIG_GUID
,EDIT_TYPE
)
)
location (ETIME_LOAD_DIR:'tstlms.dat')
)
reject limit UNLIMITED;
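Possibly relevant to the reported symptom: as with SQL*Loader, the ORACLE_LOADER access driver writes the BADFILE only when at least one record is actually rejected; a clean load produces a log file but no bad file. A quick check is simply to force a load and see whether anything was rejected:

```sql
-- Selecting from the external table triggers the load;
-- if no rows are rejected, tstlms.bad will not exist on disk.
SELECT COUNT(*) FROM rgis_tca_data_ext;
```

The shell script below should therefore guard its `wc -l` calls on the .bad files against the file-not-found case.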
Shell Script:
version=1.0
umask 000
DATE=`date +%Y%m%d%H%M%S`
TIME=`date +"%H%M%S"`
SOURCE=`hostname`
fcp_login=`echo $1|awk '{print $3}'|sed 's/"//g'|awk -F= '{print $2}'`
fcp_reqid=`echo $1|awk '{print $2}'|sed 's/"//g'|awk -F= '{print $2}'`
TXT1_PATH=/home/ac1/oracle/in/tsdata
TXT2_PATH=/home/ac2/oracle/in/tsdata
ARCH1_PATH=/home/ac1/oracle/in/tsdata
ARCH2_PATH=/home/ac2/oracle/in/tsdata
DEST_PATH=/home/custom/sched/in
PROGLOG=/home/custom/sched/logs/rgis_tca_to_tlms_create.sh.log
PROGNAME=`basename $0`
PROGPATH=/home/custom/sched/scripts
cd $TXT2_PATH
FILELIST2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
> $DEST_PATH/tstlmsedits.dat
for i in $FILELIST2
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES2 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES1="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST1="`ls -lrt tstlms*.dat |awk '{print $9}'`"
> $DEST_PATH/tstlms.dat
for i in $FILELIST1
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES1 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
cd $TXT1_PATH
FILELIST3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
> $DEST_PATH/tstlmsedits.dat
for i in $FILELIST3
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES3 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES4="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST4="`ls -lrt tstlms*.dat |awk '{print $9}'`"
> $DEST_PATH/tstlms.dat
for i in $FILELIST4
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES4 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
#connecting to oracle to generate bad files
sqlplus -s $fcp_login<<EOF
select count(*) from rgis_tca_data_ext;
select count(*) from rgis_tca_data_history_ext;
exit;
EOF
#counting the records in files
tot_rec_in_tstlms=`wc -l $DEST_PATH/tstlms.dat | awk ' { print $1 } '`
tot_rec_in_tstlmsedits=`wc -l $DEST_PATH/tstlmsedits.dat | awk ' { print $1 } '`
tot_rec_in_tstlms_bad=`wc -l $DEST_PATH/tstlms.bad | awk ' { print $1 } '`
tot_rec_in_tstlmsedits_bad=`wc -l $DEST_PATH/tstlmsedits.bad | awk ' { print $1 } '`
#updating log table
echo "pl/sql block started"
sqlplus -s $fcp_login<<EOF
define tot_rec_in_tstlms = '$tot_rec_in_tstlms';
define tot_rec_in_tstlmsedits = '$tot_rec_in_tstlmsedits';
define tot_rec_in_tstlms_bad = '$tot_rec_in_tstlms_bad';
define tot_rec_in_tstlmsedits_bad='$tot_rec_in_tstlmsedits_bad';
define fcp_reqid ='$fcp_reqid';
declare
l_tstlms_file_id number := null;
l_tstlmsedits_file_id number := null;
l_tot_rec_in_tstlms number := 0;
l_tot_rec_in_tstlmsedits number := 0;
l_tot_rec_in_tstlms_bad number := 0;
l_tot_rec_in_tstlmsedits_bad number := 0;
l_request_id fnd_concurrent_requests.request_id%type;
l_start_date fnd_concurrent_requests.actual_start_date%type;
l_end_date fnd_concurrent_requests.actual_completion_date%type;
l_conc_prog_name fnd_concurrent_programs.concurrent_program_name%type;
l_requested_by fnd_concurrent_requests.requested_by%type;
l_requested_date fnd_concurrent_requests.request_date%type;
begin
--getting concurrent request details
begin
SELECT fcp.concurrent_program_name,
fcr.request_id,
fcr.actual_start_date,
fcr.actual_completion_date,
fcr.requested_by,
fcr.request_date
INTO l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
l_requested_by,
l_requested_date
FROM fnd_concurrent_requests fcr, fnd_concurrent_programs fcp
WHERE fcp.concurrent_program_id = fcr.concurrent_program_id
AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
exception
when no_data_found then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log, 'No data found for request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when retrieving request_id request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlms.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlms_file_id,
'tstlms.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlms,
&tot_rec_in_tstlms_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlms file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlmsedits.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlmsedits_file_id,
'tstlmsedits.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlmsedits,
&tot_rec_in_tstlmsedits_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlmsedits file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
end;
exit;
EOF
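For reference, the `define` lines in the here-document above are filled in by the shell before SQL*Plus ever reads them, because an unquoted `<<EOF` body undergoes normal variable expansion. A stand-alone sketch of that mechanism (the values are made up for illustration):

```shell
#!/bin/sh
# The shell expands variables inside an unquoted here-document before
# the command reads it -- the same mechanism the sqlplus call above uses.
fcp_reqid=18661823        # illustrative value
tot_rec_in_tstlms=16      # illustrative value
defines=$(cat <<EOF
define tot_rec_in_tstlms = '$tot_rec_in_tstlms';
define fcp_reqid = '$fcp_reqid';
EOF
)
echo "$defines"
```

This is also why an empty shell variable silently becomes an empty `define` value: the expansion happens unconditionally, with no error.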
echo "rgis_tca_to_tlms_process.sql started"
sqlplus -s $fcp_login @$SCHED_TOP/sql/rgis_tca_to_tlms_process.sql $fcp_reqid
echo "rgis_tca_to_tlms_process.sql ended"
exit;
Error:
----------------------------------
RGIS Scheduling: Version : UNKNOWN
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
TCATLMS module: TCA To TLMS Import Process
Current system time is 18-AUG-2011 06:13:27
COUNT(*)
16
COUNT(*)
25
wc: cannot open /home/custom/sched/in/tstlms.bad
wc: cannot open /home/custom/sched/in/tstlmsedits.bad
pl/sql block started
old 33: AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
new 33: AND fcr.request_id = 18661823; --fnd_global.conc_request_id();
old 63: &tot_rec_in_tstlms,
new 63: 16,
old 64: &tot_rec_in_tstlms_bad,
new 64: ,
old 97: &tot_rec_in_tstlmsedits,
new 97: 25,
old 98: &tot_rec_in_tstlmsedits_bad,
new 98: ,
ERROR at line 64:
ORA-06550: line 64, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall merge time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternatively-q
ORA-06550: line 98, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql st
rgis_tca_to_tlms_process.sql started
old 12: and concurrent_request_id = '&1';
new 12: and concurrent_request_id = '18661823';
old 18: and concurrent_request_id = '&1';
new 18: and concurrent_request_id = '18661823';
old 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,&1);
new 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,18661823);
old 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,&1);
new 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,18661823);
old 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',&1);
new 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',18661823);
declare
ERROR at line 1:
ORA-20001: Error occured when executing RGIS_TCA_TO_TLMS_PROCESS.sql ORA-01403:
no data found
ORA-06512: at line 59
Executing request completion options...
------------- 1) PRINT -------------
Printing output file.
Request ID : 18661823
Number of copies : 0
Printer : noprint
Finished executing request completion options.
Concurrent request completed successfully
Current system time is 18-AUG-2011 06:13:29
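Reading the log above: the `old 64: &tot_rec_in_tstlms_bad,` / `new 64: ,` pair shows that the define was substituted with nothing, because `wc` could not open the missing .bad files and left the shell variables empty; that bare comma is exactly what PLS-00103 is complaining about. One way to fix it (a sketch, assuming an empty count should simply mean zero) is to default the variables before the here-document:

```shell
#!/bin/sh
# Default empty counts to 0 so the SQL*Plus defines are never blank.
# (tot_rec_in_tstlms_bad simulates a variable left empty by a failed wc;
# tot_rec_in_tstlmsedits_bad simulates one that was never set at all.)
tot_rec_in_tstlms_bad=""
tot_rec_in_tstlms_bad=${tot_rec_in_tstlms_bad:-0}
tot_rec_in_tstlmsedits_bad=${tot_rec_in_tstlmsedits_bad:-0}
echo "bad=$tot_rec_in_tstlms_bad edits_bad=$tot_rec_in_tstlmsedits_bad"
```

The `${var:-0}` form substitutes 0 whenever the variable is unset or empty, without changing it when a real count is present.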
---------------------------------------------------------------------------
Hi,
Check the status of the batch in transaction SM35.
If the batch is locked by mistake or failed with any other error, you can release it and also process it again.
To release it, press Shift+F4.
You can also analyse the job status with the F2 button.
Bye
-
Auto Increment ID Field Table in the Oracle Database (insert new record)
I have been using the MySQL. And the ID field of the database table is AUTO INCREMENT. When I insert a new record into a database table, I can have a statement like:
public void createThread( String receiver, String sender, String title,
String lastPostMemberName, String threadTopic,
String threadBody, Timestamp threadCreationDate,
Timestamp threadLastPostDate, int threadType,
int threadOption, int threadStatus, int threadViewCount,
int threadReplyCount, int threadDuration )
throws MessageDAOSysException
and I do not have to put the ID variable in the above method. The table will give the new record an ID number that is equivalent to the ID number of the last record plus one automatically.
Now, I am inserting a new record into an Oracle database table. I am told that I cannot do what I am used to doing with the MySQL database.
How do I revise the createThread method while I have no idea about what the next sequence number shall be?
I am still very confused; in particular, the Java part. Let me try again.
// This part is for the database table creation
-- Component primary key sequence
CREATE SEQUENCE dhsinfo_page_content_seq
START WITH 1;  -- Oracle sequences cannot start below MINVALUE (default 1)
-- Trigger for updating the Component primary key
CREATE OR REPLACE TRIGGER DHSInfoPageContent_INSERT_TRIGGER
BEFORE INSERT ON DHSInfoPageContent   -- DHSInfoPageContent is the table name
FOR EACH ROW WHEN (new.ID IS NULL)    -- ID is the column name for auto increment
BEGIN
  SELECT dhsinfo_page_content_seq.Nextval
    INTO :new.ID
    FROM DUAL;
END;
/
I am uncertain what to do with my Java code. (I have been working with the MySQL; changing to the Oracle makes me very confused.)
public void updateContent( int groupID, String pageName, int componentID,
                           String content, Timestamp contentCreationDate )
    throws contentDAOSysException
{
    // The above Java method does not take a value to insert into the ID column
    // of the DHSInfoPageContent table.
    Connection conn = null;
    PreparedStatement stmt = null;
    // What to do with the INSERT INTO below? Note the ID parameter.
    String insertSQL = "INSERT INTO DHSInfoPageContent( ID, GroupID, Name, ComponentID, Content, CreationDate ) VALUES (?, ?, ?, ?, ?, ?)";
    try
    {
        conn = DBConnection.getDBConnection();
        stmt = conn.prepareStatement( insertSQL );
        stmt.setInt( 1, id );  // Is this Java statement redundant?
        stmt.setInt( 2, groupID );
        stmt.setString( 3, pageName );
        stmt.setInt( 4, componentID );
        stmt.setString( 5, content );
        stmt.setTimestamp( 6, contentCreationDate );
        stmt.executeUpdate();
    }
    catch
    finally
-
MO Operating Unit value update through back end
Friends,
I have MO: Default Operating Unit and MO: Operating Unit defined in my Oracle Apps 11i,
and by mistake I deleted both these values through the system profile options.
Now I would like to update the MO values through the back end.
Please let me know the related tables and which user I should use to update them.
Regards,
Arun
I have MO: Default Operating Unit and MO: Operating Unit defined in my Oracle Apps 11i,
and by mistake I deleted both these values through the system profile options.
Now I would like to update the MO values through the back end.
Please let me know the related tables and which user I should use to update them.
When you remove the MO: Operating Unit profile option value at site level, no user will be able to log in to the system, not even the SYSADMIN user.
Follow the document and see if it helps:
How to Restore System Profile 'MO: Operating Unit' When it Has Been Set to Blank (Doc ID: 387581.1)
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=387581.1
Moreover, if you want to prevent this type of mistake in the future,
please follow the document below:
How To Prevent the Profile Option MO: Operating Unit being set to NULL at Site Level? (Doc ID: 393560.1)
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=393560.1
-
Managing users (Back-end v/s Front-end)
Dear Oracle gurus,
What is the better option ?
creating and managing users (data-entry operators / system operators / managers [100+]) through:
creating users + roles ( complete access/restrictions in Back-end )
or
creating table for log-on/log-off and access of forms (restriction) coded in front-end
Back-end: 10g
Front-end: Forms 6i
Hi InoL,
I totally agree with what you wrote, but the licensing part is confusing.
I.m.h.o. there is no link between NUP license and DB defined users.
Oracle will count DB users and application users the same.
This is what Oracle says:
Licensing a multiplexing environment: If Oracle software is part of an
environment in which multiplexing hardware or software, such as a TP monitor
or a web server product, is used, then all users must be licensed at the
multiplexing front end. Alternatively, the server on which the Oracle programs
are installed and/or running may be licensed on a per Processor basis. Please
refer to the "Software Investment Guide" for examples.
If you can define your users by name, then NUP (Named User Plus) is an option.
Otherwise it has to be a Processor metric license (as with a public web application).