Unique ID Creation in MII
Hi,
I need to create a unique ID every time the BLS is executed, because I want to use this unique ID within the BLS. Is there any way to create such an ID (based on a custom specification like PO0002GI, PO0003GR, etc.) within MII?
Or is there any way to assign a dynamic value to a Transaction or Local variable?
Thanks in Advance
Chandan
I am not sure exactly what you are trying to do, but I assume you want to create some kind of batch numbers and maybe even send them to SAP.
I would try these options:
1) Generate unique IDs in JavaScript and then pass them to the BLS.
2) Use SQL to do the job and fetch the result in the BLS.
3) Lastly, create a separate BLS that generates a unique ID using some logic; whenever you call it, it returns the ID you need.
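Option 3 above (a dedicated ID-generating routine) can be sketched as a simple counter with a prefix and a movement-type suffix. This is only an illustration of the idea, not MII code: the "PO" prefix, 4-digit padding, and GI/GR suffixes are assumptions based on the examples PO0002GI / PO0003GR in the question, and the class name is made up.

```python
# Minimal sketch of a counter-based unique ID generator.
# Prefix, width, and GI/GR suffixes are assumptions taken from
# the PO0002GI / PO0003GR examples in the question.
import itertools

class UniqueIdGenerator:
    def __init__(self, prefix="PO", start=1, width=4):
        self.prefix = prefix
        self.width = width
        self._counter = itertools.count(start)  # monotonically increasing

    def next_id(self, suffix):
        """Return the next ID, e.g. PO0002GI for a goods issue."""
        n = next(self._counter)
        return f"{self.prefix}{n:0{self.width}d}{suffix}"

gen = UniqueIdGenerator(start=2)
print(gen.next_id("GI"))  # PO0002GI
print(gen.next_id("GR"))  # PO0003GR
```

In a real system the counter would of course have to live somewhere persistent (e.g. a database sequence, as in option 2), or two BLS executions could get the same number.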
If you are trying something different, please share more details so we can help.
Hope this helps!!
Regards,
Adarsh
Similar Messages
-
Update/Insert Problem with Oracle Warehouse Builder
Hello,
I have an update/insert problem with OWB.
Situation: I have a source table called s_account and a target table called w_account_d. The target table already contains data that was loaded through inserts from the source table. Now someone makes changes to the data in the target table. These changes should now be passed back to the source table with an update operation. But exactly here is the problem: I can't map the data back to the source, because that would create a loop.
My idea was to set a trigger, but I can't find this component in OWB. Or is it hidden somewhere?
I have also seen properties such as CDC or conditional loading in the property inspector of the table, but I have no idea how they work.
Are there other possibilities for modeling this case? Or can anyone explain how I can implement this, perhaps with CDC?
I look forward to your replies :)
Hi,
thanks for your answer. I followed your suggestion and set the constraints of both tables directly in the database. Nevertheless, it didn't work at first. In the next step I found, by right-clicking on a table, the list point "Configure". I went to "Unique key" --> "Creation method" and set the following options: Constraint State = ENABLE, Constraint Validation = VALIDATE. The error message that previously appeared during deployment has now disappeared. Next I started the job to test whether the insert/update process works correctly. At first it seems to work, but not really.
My test scenario:
1. Load the data from the source table via the staging area to the data warehouse table: Check - it works!
2. Change one data record in the source table.
3. Load the source table with the changed data record once again to the staging area: Check - it works!
4. Load the new staging area table with the changed data record to the data warehouse table: Check - it works! BUT I cannot tell whether it was an insert or an update operation: in the design window, under the job execution window, "rows selected 98" is reported, while "rows inserted" and "rows updated" are empty. So I think it is not working correctly; in my opinion, if it worked correctly it should show "rows updated" 1.
What could still be wrong or forgotten? Any ideas?
*By the way, don't focus on the 98 rows; whether an update or an insert performs better is not the point here. This is an example table; the real tables have millions of records.*
I look forward to your answers :) -
Flat File Active Sync doesn't work for account creation without unique id
Hi,
I'm trying to set up a FlatFileActiveSync for creation and update of accounts in IDM 7.0. I've followed the below steps for this purpose :-
1) Create a correlation rule (confirmation rule not reqd in my case).
2) Create a proxy admin and assign him an empty form. Also give him control over the Top organisation.
3) Create a Flat-File Resource Adapter.
4) Create ActiveSync input form using the (Active Sync) wizard.
5) Start Active Sync...
My feed file contains only 3 fields: firstname, lastname, email ID.
My correlation rule matches IDM accounts (Lighthouse accountId) by taking the first letter of firstname and concatenating it with lastname from the data coming from the feed file.
Now everything works fine for account updates, i.e. if I change the email ID of somebody who already exists in IDM, I can actually see the changed email ID in the Configurator's console.
But if I put in a record that doesn't exist, and which I expect to be created, it gives me an error.
However, if I introduce a unique identifier in my feed file and link it with Lighthouse.accountId, account creation works fine.
Is this a limitation, or am I not doing something right?
Exception I saw in resource log with log level 4 :
2007-04-30T10:02:12.291-0400: Error Processing Line: {lastname=Pogu, firstname=Gogu, [email protected]}
com.waveset.adapter.iapi.IAPIException: There was a conflict with the record [{lastname=Pogu, firstname=Gogu, [email protected]}]
and no resolution process has been specified on the adapter.
It is recommended that you define the process for handling unmatched accounts
on this load process.
2007-04-30T10:02:12.292-0400: Poll complete.
2007-04-30T10:02:12.292-0400: SARunner: loop 1076
2007-04-30T10:02:12.314-0400: Started, paused until Mon Apr 30 10:07:12 EDT 2007
2007-04-30T10:07:12.024-0400: Pause completed
2007-04-30T10:07:12.038-0400: Polling
2007-04-30T10:07:12.056-0400: Error Processing Line: {lastname=Poker, firstname=Hoker, [email protected]}
com.waveset.adapter.iapi.IAPIException: There was a conflict with the record [{lastname=Poker, firstname=Hoker, [email protected]}]
and no resolution process has been specified on the adapter.
It is recommended that you define the process for handling unmatched accounts
on this load process.
That logic is in my correlation rule, as I specified in my initial post, and here is the XPRESS code for it:
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE Rule PUBLIC 'waveset.dtd' 'waveset.dtd'>
<!-- MemberObjectGroups="#ID#Top" description="Find out if a resource account is correlated to an IDM account" id="#ID#D23CC16ECF6E5D42:-4527465C:11224925657:-769F" lastMod="61" lastModifier="Configurator" name="HR_DB_CORR" subtype="SUBTYPE_ACCOUNT_CORRELATION_RULE"-->
<Rule subtype='SUBTYPE_ACCOUNT_CORRELATION_RULE' id='#ID#D23CC16ECF6E5D42:-4527465C:11224925657:-769F' name='HR_DB_CORR' creator='Configurator' createDate='1177449448746' lastModifier='Configurator' lastModDate='1177686884156' lastMod='61'>
<Description>Find out if a resource account is correlated to an IDM account</Description>
<cond>
<and>
<notnull>
<ref>firstname</ref>
</notnull>
<notnull>
<ref>lastname</ref>
</notnull>
</and>
<block>
<concat>
<substr>
<ref>firstname</ref>
<i>0</i>
<i>1</i>
</substr>
<ref>lastname</ref>
</concat>
</block>
<s>false</s>
</cond>
<MemberObjectGroups>
<ObjectRef type='ObjectGroup' id='#ID#Top' name='Top'/>
</MemberObjectGroups>
</Rule>
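For reference, the key the XPRESS rule above computes is simply substr(firstname, 0, 1) concatenated with lastname, guarded by the notnull checks. A minimal Python sketch of the same logic (the function name is illustrative, not part of IDM):

```python
# Sketch of the correlation key built by the XPRESS rule above:
# first letter of firstname + lastname, with a notnull-style guard.
def correlation_key(firstname, lastname):
    if not firstname or not lastname:
        # mirrors the rule's <notnull> conditions: no key, no match
        return None
    return firstname[0] + lastname

print(correlation_key("Gogu", "Pogu"))  # GPogu
```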
This is not specified in the Active Sync input form, but in the correlation rule attribute of the Active Sync config (using the wizard). Do I need to specify it there using the Field function?
Also, I figured out today that I needed to restart the IDM instance after changing the value of the "Create Unmatched Accounts" flag, and now the error is as below:
<WavesetResult>
<ResultItem type='error' status='error'>
<ResultError throwable='com.waveset.util.WavesetException'>
<Message id='SES_VIEW_CHECKIN_ERROR'>
</Message>
<StackTrace>com.waveset.util.WavesetException: Unable to checkin view. No account ID specified.
	at com.waveset.view.UserViewer.checkinView(UserViewer.java:1165)
	at com.waveset.object.ViewMaster.checkinView(ViewMaster.java:727)
	at com.waveset.sync.IAPIUserImpl.processCommand(IAPIUserImpl.java:526)
	at com.waveset.sync.IAPIUserImpl.submitCreate(IAPIUserImpl.java:195)
	at com.waveset.sync.IAPIUserImpl.submit(IAPIUserImpl.java:749)
	at com.waveset.adapter.FlatFileActiveSyncAdapter.processLine(FlatFileActiveSyncAdapter.java:404)
	at com.waveset.adapter.FlatFileActiveSyncAdapter.processFlatFile(FlatFileActiveSyncAdapter.java:350)
	at com.waveset.adapter.FlatFileActiveSyncAdapter.poll(FlatFileActiveSyncAdapter.java:307)
	at com.waveset.task.SARunner.doRealWork(SARunner.java:288)
	at com.waveset.task.Executor.execute(Executor.java:154)
	at com.waveset.task.TaskThread.run(TaskThread.java:132)
</StackTrace>
</ResultError>
</ResultItem>
</WavesetResult> -
Auto Batch Creation Unique Fields
Hi All,
I have created 2 UDF's in Marketing Documents, i.e. Attribute1 (32 char) and Attribute2 (32 char). I have made a purchase order with 6 row items and entered values into the above UDF's. Now I copy that PO to a Goods Receipt PO, and all the data, along with the UDF values, gets copied to the GRPO.
In my 6 rows I have the same item, e.g. ABC, repeated in 3 different random rows. This item is batch-managed on every transaction. When I try to add the GRPO, a Batch Setup screen pops up. On this screen, in the upper matrix (the Rows from Documents matrix), you will find the 3 rows with the same item. When you click on the Automatic Creation button, an Automatic Batch Creation screen opens. On this screen you have 2 fields, i.e. BatchAttribute1 and BatchAttribute2.
What I want to know is how to pass the UDF values of the GRPO (Attribute1 and Attribute2) to this Automatic Batch Creation screen. I am trying to do it with the SDK, but I cannot get a reference to the row for which the Automatic screen is opened, as all 3 items are ABC and they are on different rows in the GRPO (say row 1, row 3 and row 6), and each of these rows has different values in the UDFs. I need to put the respective values from the GRPO into the BatchAttribute1 and BatchAttribute2 fields of the Automatic Batch Creation screen.
I hope I have been able to explain the entire scenario clearly.
Which unique key, hidden on the Automatic Batch Creation screen, does SAP know about that we are unaware of? That is why I explained the entire scenario. My question is simply: which field in SAP stores and passes the DocNo and line no of the GRPO to the Auto Batch Creation screen? These fields are not visible to us, so how do I get a reference to this field on the Auto Batch Creation screen, or is there any other field by which SAP identifies rows during auto batch creation? I would really appreciate your kind support on this.
Regards,
Murtaza
Dear Xiaodan AN,
Thanks for your suggestion. I tried it, but it doesn't seem to work. I am on form no. 65053, which is the Auto Batch Creation screen, and I need to copy the GRPO attribute UDF into the AutoBatchCreate attribute field, as per my scenario given above.
Here is my code for the item event:
If pVal.FormType = "65053" And pVal.EventType = SAPbouiCOM.BoEventTypes.et_CLICK And pVal.ItemUID = "37" And pVal.BeforeAction = True Then
    CopyAttribute(pVal, BubbleEvent)
End If

Public Sub CopyAttribute(ByRef pVal As SAPbouiCOM.ItemEvent, ByRef BubbleEvent As Boolean)
    Dim frmAutoBatchNo As SAPbouiCOM.Form
    frmAutoBatchNo = My_Application.Forms.GetForm("65053", 1)
    Dim rowno As String
    rowno = pVal.Row
End Sub
As per your suggestion, I tried to get the row from the item event, but for all 3 items I get the same value: rowno = -1.
Can you explain further with some code?
Kind Regards,
Murtaza -
Prime device creation failed, IP address is not unique
Hi!
When I try to add a new device in Prime 1.3, I get the following message:
Add Device xx.xx.xx.xx Creation Failed. IP Address is not unique. Address exist as an interface-INVENTORY_SERVICE_110
but this address *is unique*; it does not exist in Prime. I have no interface or device with this address in Prime.
I read the logs, but I did not find anything there.
Has anybody had the same problem? How can I resolve it?
Hi,
Sometimes I have had the same problem. The device I was trying to add to Prime had already been added
through another IP address, which was discovered by an automatic discovery process. In my case it corresponded to a loopback IP. Try searching for your device by its MAC address.
Regards. -
Creation of Custom table in MII database.. Recommended?
Hi All,
In MII 12.0.6, I have a requirement to store some data in a table. As MII 12 runs on NetWeaver, it naturally has an associated database. Can we use that MII schema (or some new schema in that NetWeaver database) to create a custom table to store the data? I am planning to connect to that table by creating an IDBC connector and an SQL query.
Thanks,
Soumen
I would not create a custom table in the same schema, but I see no reason why you couldn't create another schema on the same database server to hold your custom content.
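To illustrate the suggested approach, here is a minimal sketch of a custom table being created and queried with plain SQL, the way an IDBC query would reach it. sqlite3 merely stands in for the NetWeaver database server here, and the schema, table and column names are invented for the example, not MII objects.

```python
# Sketch: create a custom table and read it back with plain SQL,
# as an IDBC connector + SQL query would. sqlite3 is a stand-in
# for the real database; all names below are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the custom schema
conn.execute("""
    CREATE TABLE custom_mii_data (
        id        INTEGER PRIMARY KEY,
        tag_name  TEXT NOT NULL,
        tag_value TEXT
    )
""")
conn.execute(
    "INSERT INTO custom_mii_data (tag_name, tag_value) VALUES (?, ?)",
    ("TEMP_SENSOR_1", "72.5"))
conn.commit()

# The kind of SELECT a BLS would issue through the IDBC connector
row = conn.execute(
    "SELECT tag_value FROM custom_mii_data WHERE tag_name = ?",
    ("TEMP_SENSOR_1",)).fetchone()
print(row[0])  # 72.5
```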
-
Creation of movement type - unique requirement
Dear Friends,
Here I am enclosing my unique requirement to create a movement type. In sales returns, when PGR (post goods receipt) is done, a material document is updated automatically. The value is picked from the material master; this is the normal process.
In the normal process, when PGR is done, the FI entry is:
Finished Goods Dr e.g. Rs. 1000
COGS Cr e.g. Rs. 1000
My requirement is to create a new movement type which does not hit the regular G/L accounts for Finished Goods and COGS (mentioned in the above entry) when PGR is done, but hits a separate G/L instead. Kindly tell me how to create a new movement type that supports this requirement, and where I can assign the new G/L.
Please tell me, it's very urgent.
I don't think what you are imagining is possible.
A movement type only gives you the transaction keys,
and in OBYC, G/L accounts are attached to these transaction keys.
Moreover, you cannot make your own movement type from scratch; you have to create a movement type by copying a standard movement type.
So when you copy, the transaction keys are also copied, and you cannot change them.
Look in OMWN; you cannot change anything here.
So I think it is not possible by creating a new movement type.
Hope this clears it up.
Reward if useful. -
Creation of Unique Number Ranges for Organizational Objects
Hi Gurus,
As per our client's requirements, we need to create unique number ranges for organizational objects such as O, S and P, to distinguish them based on the object ID. Could you please help me create these?
And also, how do we create unique number ranges for Personnel Development objects such as QK & Q?
Thanks in Advance
Regards
Vinoth Kumar.R
Hi,
To maintain object-specific number ranges, go to the customizing node Personnel Management --> Org. Management --> Basic Settings --> Number Range Maintenance --> Maintain Number Ranges.
Here there will already be a default entry, $$$$.
To create a different range for an object, say O, you can use the following options:
Create an entry $$O - a number range applicable to all org units across all plan versions.
Create an entry 01O - a number range applicable to all org units under plan version "01".
Then under interval maintenance, create the desired number ranges under IN for internal or EX for external (check the external tick box for external number ranges).
In a similar manner you can create number ranges for the objects C, S, Q & QK.
Hope this is helpful.
Regards,
Shreyasi. -
Webbased Creation of unique user account
I can't find any solution for getting students to enter their own credentials on a web-based form and then have Open Directory accept that information.
Please help if you know a solution to this topic!
http://sourceforge.net/projects/osxpass/
HTH
-Ralph -
Problem in creation of HUs through FM HU_REPACK and HU_POST
Hi Experts
This is regarding the problem in creation of HUs with function modules HU_REPACK and HU_POST.
I am sending two unique source HUs with a single destination HU, for the same material and batch, into HU_REPACK, and it shows that repacking is successful. As soon as the HU_REPACK function module passes, I call HU_POST without any parameters except messages, as the data is picked up automatically from internal memory (these function modules belong to the same function group).
The HU_POST FM also passes successfully and even creates a transfer order, but the problem is that the HU is created with MULTIPLE LINE ITEMS for the same material and batch, which should not happen at all.
Please find the example of the HU created in the system
0 10020000038479 S-DISP 1 EA S DISPNSARY
1 0024632192 000004 810062 0.250 KG L0533A4172 Lactose
1 0024632192 000004 810062 24.900 KG L0533A4172 Lactose
Material is 810062
Batch is L0533A4172
Can you please assist me in achieving a single-line-item HU when repacking data for the same material and batch, as happens when you create the HU through manual processing, i.e. HU02 etc.?
Hi Sailaja,
I had a similar requirement before with the HU_PACKING_AND_UNPACKING FM. It was tough debugging before I came up with the right solution. Unfortunately, I was not able to document that code.
But here is what I have been doing.
You're on track with the solution below; never use BDC, as it will only limit the handling units you can work with.
The function modules that start with V5_ play an important role, as you need to initialize the global variables and tables of this function group with values before calling the HU_PACKING_AND_UNPACKING FM.
Debug inside the function module, look for the FM call that returns SY-SUBRC <> 0, and set a breakpoint there.
Restart the program, then debug inside that FM again (I do not want to go further into the details; I give you the presumption of literacy).
You will find some items that have no value; try to initialize everything by utilizing the FMs of function group V51S.
Happy Debugging. -
-
Help needed for automatic batch creation for by-products(PP-PI)
Hi,
In the Goods Receipt tab in transaction COR1, we have a field for creating a batch manually for the header material. Clicking on Create generates a pop-up which facilitates creation of a batch for the header material.
At this point, if I go to the material list, I would like to see the same batch number (same as the header) written against the by-products.
Essentially, when I create a batch for the header material, it should also create a batch with the same number for the by-product and populate it in the Materials tab.
One important thing to note here is that when we click on the Create Batch icon, the data is stored in a structure, and the batch is actually created only when the process order is saved.
Please advise if this can be achieved with standard settings in SAP, or whether we need to change the standard code. If the latter is the case, please advise where to insert the required code. I have found a user exit at the point where the batch number is created manually.
Please advise.
As per the SAP batch definition, a batch is the quantity or partial quantity of a certain material or product that has been produced according to the same recipe, and represents one homogeneous, non-reproducible unit with unique specifications. So logically you cannot have the same batch number for two materials.
If you want to find the relation between batches of the header material and a by-product produced during production, you can use the Batch Information Cockpit.
To see batch details in the Batch Cockpit (t-code BMBC), you have to activate the batch where-used function in the IMG, using the path: Batch Management --> Batch Where-Used List --> Make Settings for Batch Where-Used List.
Regards,
Sachin -
Do Not Check Uniqueness of Data in Write Optimised DSO
Hello,
I am working with a write-optimized DSO that already contains a billion records. The flag 'Do Not Check Uniqueness of Data' is checked in its settings (meaning it will definitely not check for uniqueness of data). I am thinking of removing this flag and activating the DSO again, because without the flag, LISTCUBE on the DSO offers request ID as an input; without that input, LISTCUBE never returns results. (I have to analyze aggregations.)
I tried removing this flag and then activating the DSO in a production system with 17 million records, which took 5 minutes for activation (index creation). So the maths says that for a billion records, the activation transport will take around 6 hours when moving to production.
Questions:
How does this flag check the uniqueness of records? Will it check the active table or the index?
To what extent will the DTP slow down subsequent data loads?
Are there any other factors/risks/precautions to be taken?
Let me know if the questions are not clear or further inputs are required from my side.
Thanks.
Agasti
Hi,
Please go through this blog:
/people/martin.mouilpadeti/blog/2007/08/24/sap-netweaver-70-bi-new-datastore-write-optimized-dso
As far as your questions are concerned, I hope the above blog will answer most of them; if it doesn't, please read the thread mentioned below.
Use of setting "Do Not Check Uniqueness of Data" for Write Optimized DSO
Regards
Raj -
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance, has anyone encountered something similar?
The problem may be related to a difference, between 10g and the database that was upgraded from 10g to 11g, in the behavior of an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA;
When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately, granting the select privilege to a role for the table which was just created:
{code}
create or replace trigger select_grant
after CREATE on schema
declare
    l_str varchar2(255);
    l_job number;
begin
    if ( ora_dict_obj_type = 'TABLE' ) then
        l_str := 'execute immediate "grant select on ' ||
                 ora_dict_obj_name ||
                 ' to select_role";';
        dbms_job.submit( l_job, replace(l_str,'"','''') );
    end if;
end;
{code}
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending SYS-generated SQL, which is not issued when the same test is run on a 10g environment set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  --------
Parse       65      0.01       0.01          0          1         32         0
Execute     84      0.04       0.09         20         90        272        35
Fetch       88      0.00       0.10         30        281          0        64
------- ------  -------- ---------- ---------- ---------- ----------  --------
total      237      0.07       0.21         50        372        304        99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
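One quick sanity check on those recursive totals: the library-cache miss counts make the hard-parse rate explicit. A small illustrative calculation using the figures above:

```python
# Hard-parse rate for the recursive SQL in the TKPROF totals above.
recursive_parses = 65        # "Parse" count, recursive statements
lc_misses_during_parse = 38  # "Misses in library cache during parse"

hard_parse_rate = lc_misses_during_parse / recursive_parses
print(f"{hard_parse_rate:.0%} of recursive parses were hard parses")
```

Well over half of the recursive parses missed the library cache, which is consistent with the orphaned-job cleanup described below churning through dictionary SQL that had not been cached.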
Edited by: user8598842 on Mar 11, 2010 5:08 PM

So while this certainly isn't the most elegant of solutions, and most assuredly isn't in the realm of supported by Oracle...
I've used the DBMS_IJOB.DROP_USER_JOBS('username'); package to remove the 194558 orphaned job entries from the job$ table. Don't ask, I've no clue how they all got there; but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now wasted ~67MB of space I've opted to create a new index on the JOB$ table to sidestep the full table scan.
{code}
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
{code}
The next option would be to try to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Unique ID generation in OIM 11.1.1.5
Dear All,
I would like to generate the automatic unique ID during the user creation in OIM 11.1.1.5. Can any one please suggest me to do the customization for it and share some documents which can be helpful to me.
Thanks
Harry
Edited by: Harry-Harry on Nov 5, 2012 12:35 AM

If you have a specific format for the ID generation then you will have to use a post-process event handler for this.
In that case you first provision the record with some random ID, and afterward post-process it to get the required formatted ID.
Refer
http://docs.oracle.com/cd/E14571_01/doc.1111/e14309/oper.htm
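The two-phase approach above (provision with a placeholder ID, then post-process it into the required format) ultimately reduces to formatting a sequence value. The actual OIM plumbing is a Java event handler registered against the user-creation orchestration, but the formatting logic itself is easy to sketch. The function name, prefix, suffix, and zero-pad width below are illustrative placeholders, not OIM API names; they mirror the PO0002GI / PO0003GR style asked about in the original question:

```python
def format_unique_id(seq: int, prefix: str = "PO", suffix: str = "GI") -> str:
    """Build an ID like PO0002GI from a numeric sequence value.

    prefix, suffix, and the 4-digit zero padding are assumptions for
    illustration; adjust them to your own naming specification.
    """
    return f"{prefix}{seq:04d}{suffix}"

# The sequence value would come from a database sequence or counter,
# so concurrent executions never hand out the same number.
print(format_unique_id(2))               # PO0002GI
print(format_unique_id(3, suffix="GR"))  # PO0003GR
```

The important design point is that uniqueness comes from the sequence, not from the formatting: let the database (or OIM) allocate the number atomically, and keep the event handler responsible only for dressing it up in the required pattern.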