EM 12.1.0.2 Hundreds of Records in Named Credentials?
Hello:
I started looking for notes on MOS when I could not see the list of named credentials I had added to EM. This is a fresh install of EM 12.1.0.2, with an 11.2.0.2.11 repository database. All the EM components and monitored targets are on RHEL 5 x86-64. There is one MOS note, Missing Named Credentials in the Jobs Drop Down List in Cloud Control 12c (Doc ID 1493690.1), from December 2012. The note says to increase the limit on the number of named credentials that may be stored, because table sysman.em_nc_creds has more than 50 records, 50 being the default limit.
However, table sysman.em_nc_creds has hundreds of records, and I only set one named credential so far through the EM API. Table sysman.em_nc_creds is populated with all the targets that were added when 12c agents were deployed to new hosts. It does not matter if the 12c agents were deployed using Setup > Add Targets > Add Targets Manually > Add Host Targets, or if server-side scripts were executed from the new nodes to EM. The table has a named credential for every target discovered by EM.
My questions are: is it expected for EM to add named credentials on its own? If so, is it reasonable for me to raise the limit on named credentials to (say) 2000, which should cover every current and future target I may register with EM? Last: should every named credential stored in sysman.em_nc_creds be visible to its owner through the EM API?
Thank you for your help,
Laura Sallwasser
Hello Rob,
The following document IDs describe various causes of the error message string "OMSCA-ERR:Configuring WebTier failed.":
1391825.1
1488805.1
1500231.1
1530611.1
1537296.1
In my case, the cause described in document ID 1537296.1 was the applicable one.
So, after deleting all files of the form pki* from the /tmp directory (or the specified TMP directory) and pressing "Retry" in the installation wizard, the installation finished successfully :-).
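That cleanup step can also be scripted. The following is an illustrative sketch only (the note just says to delete the files manually); it uses Python's standard library, and tempfile.gettempdir() honours the TMP/TMPDIR environment variables, falling back to /tmp on most Unix systems:

```python
import glob
import os
import tempfile

# Remove the stale pki* files the installer left behind in the
# temporary directory before pressing "Retry" in the wizard.
tmp = tempfile.gettempdir()
for path in glob.glob(os.path.join(tmp, "pki*")):
    os.remove(path)
    print("removed", path)
```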
I hope this information will be helpful for you.
Best regards
Stephan
Similar Messages
-
XML Publisher report which shows 5 records per page
Dear All,
Conditionally, I want my XML report to display only 5 records per page,
even if there are hundreds of records in the XML file.
Please, could you help me out?
Waiting for your reply.
Regards,
Sarfraz.
For 11i, please see the XML Publisher User Guide at http://download.oracle.com/docs/cd/B25516_18/current/html/docset.html on how to achieve this
HTH
Srini -
How to load a flat file with lots of records
Hi,
I am trying to load a flat file with hundreds of records into an apps table. When I create the process and deploy it onto the console, it asks for an input in an HTML form. Why does it ask for an input when I have specified the input file directory in my process? Is there any way around this where it just reads all the records from the flat file directly? Are custom queues in any way related to what I am about to do? Any documents on this process will be greatly appreciated. If anyone can help me on this it will be great. Thank you guys...
After deploying it, do you see that it is active and the status is on in the BPEL console's BPEL Process tab? It should not ask for input unless you are clicking it from the Dashboard tab. Do not click it from the Dashboard. Instead, put some files into the input directory. Wait a few seconds and you should see instances of the BPEL process created, which start to process the files asynchronously.
-
Hi,
I have inserted the following XML document in to a table with an XMLType column:
<?xml version="1.0" encoding="UTF-8"?>
<PhotoImaging xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlns.oracle.com/xdb/photoImaging.xsd">
<EmpData>
<VRN>1111</VRN>
<Surname>Shah</Surname>
<Forename>Sunil</Forename>
<PayNo>1234</PayNo>
<WarrantNo>1234</WarrantNo>
<ImageFile>c:\sunil.jpg</ImageFile>
<ImageDate>12-12-04</ImageDate>
</EmpData>
<EmpData>
<VRN>2222</VRN>
<Surname>Malde</Surname>
<Forename>Kalpa</Forename>
<PayNo>5678</PayNo>
<WarrantNo>5678</WarrantNo>
<ImageFile>c:\kalpa.jpg</ImageFile>
<ImageDate>12-12-05</ImageDate>
</EmpData>
</PhotoImaging>
This is just a simple XML document with 2 records.
The problem I am having is when trying to query this document to retrieve, say, all the Surnames.
I can use:
select extractvalue(xml, 'PhotoImaging/EmpData/Surname') from mphr2_photo_xml
This works well if the XML document only contains one record, but in my case the XML document will contain hundreds of records, and extractValue can only return one value.
Does anyone know of a way to query this XML document and return all the Surnames?
Thanks, Sunil
It's OK. I got it from looking at some of the other threads.
For information the SQL is as follows:
select extractValue(value(q), '//Surname')
from mphr2_photo_xml,
table (xmlsequence(extract(xml, '/PhotoImaging/EmpData'))) q -
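As a side note, the many-rows extraction that xmlsequence performs can be mirrored outside the database. Here is an illustrative Python analogue using the standard xml.etree.ElementTree module (not Oracle code); it iterates every EmpData node and pulls the Surname from each, just like the cursor over the xmlsequence table function:

```python
import xml.etree.ElementTree as ET

# A trimmed-down version of the document inserted above.
doc = """<PhotoImaging>
  <EmpData><Surname>Shah</Surname><Forename>Sunil</Forename></EmpData>
  <EmpData><Surname>Malde</Surname><Forename>Kalpa</Forename></EmpData>
</PhotoImaging>"""

root = ET.fromstring(doc)
# Equivalent of xmlsequence(extract(...)): one iteration per EmpData
# node, then extractValue-style text retrieval from each node.
surnames = [emp.findtext("Surname") for emp in root.findall("EmpData")]
print(surnames)  # ['Shah', 'Malde']
```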
DNS Scavenging - Which Records are scavenged?
I am about to enable scavenging in a domain that has never had scavenging enabled properly. There are hundreds of records with old time stamps. We have done our due diligence in researching records before deleting an old record that has
an old time stamp. Previous admins would let a server grab a DHCP address and then statically assign that address.
I know that Event ID 2501 will give me a summary of how many records were scavenged. I seem to remember (it's been a while since I have been in a mess like this) that there is a way to get a list/log of the records that were scavenged. I hope
we have all the records set, but the first scavenging period may be painful.
Is there a way to get a list of each record that was scavenged?
You might want to set up DHCP credentials and add the DHCP server to the DnsUpdateProxy group. This way it will update the IP of the host instead of creating another one.
And you really don't want to go below 24 hours with a lease, because technically scavenging is in multiple of days. And you must set the scavenging NOREFRESH and REFRESH values
combined to be equal or greater than the DHCP Lease length.
DHCP DNS Update summary:
- Configure DHCP Credentials.
The credentials only need to be a plain-Jane, non-administrator, user account.
But give it a really strong password.
- Set DHCP to update everything, whether the clients can or cannot.
- Set the zone for Secure & Unsecure Updates. Do not leave it Unsecure Only.
- Add the DHCP server(s) computer account to the Active Directory, Built-In DnsUpdateProxy security group.
Make sure ALL other non-DHCP servers are NOT in the DnsUpdateProxy group.
For example, some folks believe that DNS servers or other DCs that are not
running DHCP should be in it.
They must be removed or it won't work.
Make sure that NO user accounts are in that group, either.
(I hope that's crystal clear - you would be surprised how many
will respond asking if the DHCP credentials should be in this group.)
- On Windows 2008 R2 or newer, DISABLE Name Protection.
- If DHCP is co-located on a Windows 2008 R2, Windows 2012, Windows 2012 R2,
or NEWER DC, you can and must secure the DnsUpdateProxy group by running
the following command:
dnscmd /config /OpenAclOnProxyUpdates 0
- Configure Scavenging on ONLY one DNS server. What it scavenges will replicate to others anyway.
- Set the scavenging NOREFRESH and REFRESH values combined to be equal or greater than the DHCP Lease length.
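The timing rule in the last item can be sanity-checked with a trivial helper. This is an illustrative sketch only; it does not query DNS or DHCP, and the function name is my own:

```python
def scavenging_ok(norefresh_days: int, refresh_days: int, lease_days: float) -> bool:
    """Return True when the DNS aging intervals satisfy the rule above:
    NOREFRESH + REFRESH combined must be equal to or greater than the
    DHCP lease length."""
    return norefresh_days + refresh_days >= lease_days

# The default 7 + 7 day intervals comfortably cover a 24-hour lease:
print(scavenging_ok(7, 7, 1))   # True
# But an 8-day lease against 3 + 3 day intervals violates the rule:
print(scavenging_ok(3, 3, 8))   # False
```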
More info:
This blog covers the following:
DHCP Service Configuration, Dynamic DNS Updates, Scavenging, Static Entries, Timestamps, DnsUpdateProxy Group, DHCP Credentials, prevent duplicate DNS records, DHCP has a "pen" icon, and more...
Published by Ace Fekay, MCT, MVP DS on Aug 20, 2009 at 10:36 AM
http://blogs.msmvps.com/acefekay/2009/08/20/dhcp-dynamic-dns-updates-scavenging-static-entries-amp-timestamps-and-the-dnsproxyupdate-group/
I also recommend reviewing the discussion in the link below:
Technet thread: "DNS Scavenging "
https://social.technet.microsoft.com/Forums/windowsserver/en-US/334973fd-52b4-49fc-b1d8-9403a9481392/dns-scavenging
Some other things to keep in mind with registration and ownership to help eliminate duplicate DNS host records registered by DHCP:
=====================================================
1. By default, Windows 2000 and newer statically configured machines will
register their own A record (hostname) and PTR (reverse entry) into DNS.
2. If set to DHCP, a Windows 2000, 2003 or XP machine, will request DHCP to allow
the machine itself to register its own A (forward entry) record, but DHCP will register its PTR
(reverse entry) record.
3. If Windows 2008/Vista, or newer, the DHCP server always registers and updates client information in DNS.
Note: "This is a modified configuration supported for DHCP servers
running Windows Server 2008 and DHCP clients. In this mode,
the DHCP server always performs updates of the client's FQDN,
leased IP address information, and both its host (A) and
pointer (PTR) resource records, regardless of whether the
client has requested to perform its own updates."
Quoted from, and more info on this, see:
http://technet.microsoft.com/en-us/library/dd145315(v=WS.10).aspx
4. The entity that registers the record in DNS, owns the record.
Note "With secure dynamic update, only the computers and users you specify
in an ACL can create or modify dnsNode objects within the zone.
By default, the ACL gives Create permission to all members of the
Authenticated User group, the group of all authenticated computers
and users in an Active Directory forest. This means that any
authenticated user or computer can create a new object in the zone.
Also by default, the creator owns the new object and is given full control of it."
Quoted from, and more info on this:
http://technet.microsoft.com/en-us/library/cc961412.aspx
=====================================================
Ace Fekay
MVP, MCT, MCSE 2012, MCITP EA & MCTS Windows 2008/R2, Exchange 2013, 2010 EA & 2007, MCSE & MCSA 2003/2000, MCSA Messaging 2003
Microsoft Certified Trainer
Microsoft MVP - Directory Services
Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
This posting is provided AS-IS with no warranties or guarantees and confers no rights. -
Change Request for records in table
Dear all,
I want to create a Change Request to include all the records in my Z table, using SM30.
I don't want to change record by record, as the table includes hundreds of records.
Could anybody tell me how?
Best Regads,
Jack
Hi Jack,
Including Table Entries in a Transport Request
Enter a transport request.
Select the table entries you wish to transport.
Assign the selected table entries to the transport request using the Include in request function.
This marks the entries for inclusion in the transport request.
Save your changes.
The selected table entries are added to the transport request.
You can display all table entries which are either marked for inclusion or already included in the transport request by choosing Choose → All in request.
You can display all table entries which are not included in the transport request by choosing Choose → All not in request.
Regards,
Deepak Kori -
Hi All,
I am a newbie and I have one question. Suppose there is one table containing hundreds of records, and I have now
added a DATE column to that table. Now I have to insert sysdate in that column for all rows.
How can I do it by executing a single query?
While adding a column to a table, you can give a default value to be set for all previous records with the ALTER TABLE command:
alter table employee add cur_date date default sysdate;
But if the table has already been altered, then you need to UPDATE the old records with the default value:
UPDATE mytable
set datecol = sysdate
where datecol is null -
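The same two-step pattern (add the column, then backfill the old rows) can be sketched outside Oracle. This is a hypothetical illustration using Python's built-in sqlite3; note that SQLite cannot apply a non-constant default such as sysdate in ALTER TABLE, so the UPDATE step performs the backfill:

```python
import sqlite3
import datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [(1, "Shah"), (2, "Malde")])

# Step 1: add the new column; every pre-existing row gets NULL in it.
conn.execute("ALTER TABLE employee ADD COLUMN cur_date TEXT")

# Step 2: backfill the old rows, exactly like the
# "UPDATE ... WHERE datecol IS NULL" statement above.
today = datetime.date.today().isoformat()
conn.execute("UPDATE employee SET cur_date = ? WHERE cur_date IS NULL",
             (today,))

rows = conn.execute("SELECT id, cur_date FROM employee").fetchall()
print(rows)  # both rows now carry today's date
```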
Batch delete custom obeject records
Hi expert,
we accidentally imported hundreds of records into a custom object. I understand the batch delete function does not apply to custom objects. Is there any other alternative to batch delete these unwanted records? (besides manually deleting them one by one... :P)
Thanks, sab
Hello Bob,
The customer care replied that they don't know when this patch will apply to our environment. Is there any way we can push this to be available asap?
Oracle customer care's reply is as follows:
1. Web Services Support for Custom Object 3 will be available in the new Patch 931.0.03. Unfortunately we don't have any information regarding the date of deployment of this patch for your environment.
2. An Enhancement Request, 3-600732901, has been logged with the Siebel Support Engineers regarding this issue. Please be assured that development and engineering will analyze this request and will take appropriate action towards implementation if it meets their approval. We appreciate all customer feedback and will work hard to implement as much as we can into our product. However, we are currently unable to provide you with an estimated time of implementation while your request is being analyzed and processed. Please review the Training and Support portal for future Release Notes indicating the most current product updates, or contact Professional Services for a custom solution.
Thanks, Sab. -
DNS records not always updating / up to date / correct
Hi
We have a local domain with a primary DC running Windows Server 2008 R2 along with DHCP and DNS (AD integrated), and a secondary backup DC running Windows Server 2008 (non-R2); however, it has been off for a very long time due to malfunctioning
hardware. We also have another domain on the same LAN, in a different forest altogether, with a trust set up between the two domains. This other domain has a Windows Server 2008 R2 machine as its primary DC and utilizes the first-mentioned DC for DHCP
as well. The LAN has physical Ethernet connectivity and WiFi, as the workstations are mostly laptops. DHCP leases are set to 24 hours and DNS Aging and Scavenging is configured for both domains.
I have been troubled by an issue for some time now where, in some cases, there is a mismatch between the IP a laptop has and what DNS has captured. I tried to reproduce the issue but am unable to do so: I would connect a laptop via cable, then switch
to WiFi and then back again, and each time DNS gets updated accordingly. I have tried the same tests with both connected simultaneously while switching between them as well.
What is curious is that I experience this intermittent issue in both DNS forward lookup zones for the respective domains. Keep in mind, as I said previously, that these are completely separate domains from different forests with a trust
configured between them. Other than the trust, the domains also have the DHCP server in common, which makes me suspect the issue is related to it. I have configured the IPv4 DHCP setting "Always dynamically update DNS A and PTR records"
and ensured credentials were set which will be used to register and update records on behalf of clients.
This issue is causing problems with internet access through our hardware firewall, as the sessions are dependent on the accuracy of DNS. Please, could anyone try and assist me. Thank you in advance.
I have made an interesting discovery. At one point yesterday a client was connected to both cable and WiFi, which means DHCP had 2 leases, one for each adapter. Later in the day WiFi was completely turned off through a physical toggle switch on the
machine and the client was only plugged in via cable. Further in the day the user complained that there was no internet connectivity (remember I said our firewall is dependent on the DNS' accuracy). Upon inspection I found that DNS had the IP of the WiFi adapter and
not the one from cable, yet the WiFi adapter had been turned off completely for a couple of hours. I ran ipconfig /registerdns, which corrected the A record, however only for about an hour, after which the WiFi IP overwrote the record again. I ended up deleting
the WiFi adapter's DHCP lease late in the afternoon, after which we all left for the day not too long thereafter. In the morning I had the same complaint of no internet access, and again found that the cable IP was not reflected in DNS for the client's A record;
I now found a completely new IP which it did not have before, yet according to DHCP, the MAC address matched the WiFi adapter. When I checked, the WiFi adapter was still completely off. Somehow the record keeps being overwritten by something (I suspect
DHCP), even though the other adapter is completely turned off?! -
Agent Desktop Recording and Silent Monitoring with IP Communicator.
Reading through the forums I have seen several posts which make me think this should work, but I can't seem to get silent monitoring or recording using the agent desktop to work when the agent is connected through IP Communicator. Currently I have help desk agents using extension mobility to log into 7962s that are connected to their desktops running agent desktop connecting to UCCX 7.01. Silent monitoring and recording work fine with their hard phones. When I install IP Communicator on the PC and log into it using EM, the agent desktop takes control of the IPC just fine and will distribute calls to it, but my recordings are blank and silent monitoring from a supervisor station fails to initialize. Is there something I am missing in the configuration that is special when using IPC instead of a hard phone? Thanks in advance.
Couple of things I've learned about the CIPC and monitoring/recording:
No named devices. Use the SEP + Mac Address of the local Ethernet interface.
Ensure the Ethernet interface can be put into promiscuous mode.
Ensure you are NOT using a shared line appearance for the IPCC Extension.
If you are using CAD to do the monitoring/recording, launch the CIPC before you launch CAD
If you are using SPAN, ensure the CIPC RTP traffic will traverse the network where the SPAN interface is located.
If you are calling phone-to-phone, know that the CIPC will attempt to negotiate G.722. UCCX cannot monitor/record G.722. Set the region or call to the PSTN where you can guarantee a G.711 or G.729 call. -
Matching score for new records added to existing workflow
Hi SDNers,
My doubt is:
I have 2 Workflows which are already in process. The triggering event for them is Manual / Record Import.
Now I manually assigned 20 records to "Workflow A" based on Condition 1
Also, I manually assigned 20 records to "Workflow B" based on Condition 2
I am importing 30 new records. Based on the condition, I want to assign these records to the existing Workflow A / Workflow B.
Note: There is a Match stencil, so the newly created records have to be matched against the existing records in the present Workflow itself.
Is it possible to add new records to an existing workflow manually?
Also, what about the Matching score? Will the records be matched?
Thanks & Regards,
Priti
Hi Priti,
I tried restricting records using Named Searches and Masks, but that includes all the records with Match step property Records Vs All. You have to perform some manual step, either by selecting records using some search criteria or by using Named Searches, i.e.:
1. Create one field say New of type boolean and by default set it to NO.
2. Create one named search for this field value to YES.
3. Create one assignment which sets the value for this field to YES and add this assignment in the workflow as the first step.
4. Whenever you import records, the assignment will set New=YES for all the records imported. Now, when you add more records, search the previous records using the Restore Named Search function, which will give the list of records just imported. You can then perform the Matching and Merging operations.
5. Add one more assignment to the workflow as the last step, which should set New=NO so that the records do not appear next time for Matching.
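The five steps above amount to a flag-and-reset pattern, which can be sketched generically. This is a hypothetical Python stand-in (plain dicts, not MDM records or APIs) just to illustrate the flow:

```python
# Hypothetical sketch of the "New" flag pattern described in steps 1-5.

def import_records(store, incoming):
    # Steps 3/4: the first workflow assignment sets New=YES
    # on everything that was just imported.
    for rec in incoming:
        rec["New"] = True
        store.append(rec)

def records_to_match(store):
    # Step 2: the named search restricted to New=YES.
    return [rec for rec in store if rec["New"]]

def finish_matching(store):
    # Step 5: the closing assignment resets New=NO so the records
    # do not reappear in the next matching run.
    for rec in records_to_match(store):
        rec["New"] = False

# Two records already processed earlier, two newly imported.
store = [{"id": 1, "New": False}, {"id": 2, "New": False}]
import_records(store, [{"id": 3}, {"id": 4}])
print([r["id"] for r in records_to_match(store)])  # [3, 4]
finish_matching(store)
print([r["id"] for r in records_to_match(store)])  # []
```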
Regards,
Jitesh Talreja -
How to perform simultaneous multitrack recording with an Audigy 4 Pro
Dear Sir/Madam,
We are coming from the University of Maribor, Slovenia. We recently bought an Audigy 4 Pro Sound Blaster for our studio recording purposes. Here is a description of the problem:
We would like to record a speech database which will be used later to perform text-to-speech synthesis (sampling rate of 96 kHz, 24-bit resolution). Therefore, we need to SIMULTANEOUSLY record the stereo signals from 2 microphones located at different distances. That is, for each microphone we would have two channels, which results in a requirement to record 4 channels simultaneously!!!
We succeeded in recording the signals from either the first or the second microphone separately, but we couldn't record the stereo signals from both microphones simultaneously!!! Namely, in the Creative mixer it is possible to select ONLY one recording input at a time. But we need to record the signal simultaneously from two inputs.
Our questions are:
1. How can we perform simultaneous, multitrack recording with an Audigy 4 Pro using two stereo inputs (e.g. from the Line In and Line In 2 inputs)?
2. Which Creative-recommended recording software supports this multitrack recording? Namely, the recordings from the two microphones should be aligned in time as much as possible.
Thank you very much for your help!
Altair
Hi
You need recording software (Cubasis (not sure), Cubase, Sonar, etc.) which supports
ASIO drivers, and a 'multichannel' microphone pre-amp/mixer to get the microphone(s) connected
'separately' into the Audigy's inputs.
An ASIO driver gives more 'input sources' to choose from compared to WDM's one stereo/2 mono
channel capability.
.jtp -
(OSStatus error -108.) | Quicktime Screen Recording
The operation couldn’t be completed. (OSStatus error -108.)
I receive this error message when attempting to use the "screen recording" function of Quicktime X. Cannot find any information on this specific error besides the fact that it is due to insufficient memory, although I don't believe I actually have a memory issue. The "movie recording" and "audio recording" functions still work. Tried another screen recording program named Voila, and while I can begin to record, after I stop recording I end up with a corrupted file. Strange thing is, I tried a third screen recording program, Debut, a free program, and can successfully record my screen (although the video is choppy/essentially unusable). I have been having trouble with my MacBook since I installed a CUDA driver (now deleted), and so I believe this could be the actual root of the problem. Do I need to install a different graphics driver? Any insight is greatly appreciated.
Running Snow Leopard 10.6.8; 2.66 GHz Intel core i7; 4 GB 1067 MHz DDR3; MacBook Pro from 2010.
Intel HD Graphics:
Chipset Model: Intel HD Graphics
Type: GPU
Bus: Built-In
VRAM (Total): 288 MB
Vendor: Intel (0x8086)
Device ID: 0x0046
Revision ID: 0x0018
gMux Version: 1.9.21
Displays:
Display Connector:
Status: No Display Connected
NVIDIA GeForce GT 330M:
Chipset Model: NVIDIA GeForce GT 330M
Type: GPU
Bus: PCIe
PCIe Lane Width: x16
VRAM (Total): 512 MB
Vendor: NVIDIA (0x10de)
Device ID: 0x0a29
Revision ID: 0x00a2
ROM Revision: 3560
gMux Version: 1.9.21
Displays:
Color LCD:
Resolution: 1680 x 1050
Pixel Depth: 32-Bit Color (ARGB8888)
Main Display: Yes
Mirror: Off
Online: Yes
Built-In: Yes
Display Connector:
Status: No Display Connected
peljmies
You have resurrected an ancient thread by posting in it, and since there was no answer in it, your question may benefit from starting a New Question!!
I would title it something like " QT screen recording crashing with alert "The operation couldn’t be completed. (OSStatus error -536870186.)" "
(Before you post, double check the error alert message text)
Include as much of the following as possible:
Quoted from Apple's "How to write a good question"
To help other members answer your question, give as many details as you can.
Include your product name and specs such as processor speed, memory, and storage capacity. Please do not include your Serial Number, IMEI, MEID, or other personal information.
Provide the version numbers of your operating system and relevant applications, for example "iOS 6.0.3" or "iPhoto 9.1.2".
Describe the problem, and include any details about what seems to cause it.
List any troubleshooting steps you've already tried, or temporary fixes you've discovered.
CCC -
Webi based on BW query based on Infoset comes back with wrong values
Hello:
I have a Webi report on an OLAP Universe, on Bex Query.
The bex query is based on 1 INFOSET.
When I run the Webi report, the values that come back are totally wrong; the correct results are not returned.
When I do a test with Crystal Reports against the infoset, I can see the correct values. (I'm aware Crystal uses different drivers.)
The test in BW query designer and Crystal both bring back proper values from the Infoset-based BW query.
Webi on the Infoset-based BW query comes back with wrong data.
The query is simple:
pull in 3 attributes and 3 key figures, where componentkey = "111".
I get 36 rows in Crystal and 36 rows in Bex Analyzer, 36 rows in BW query designer (web analyzer).
I get many rows in Webi (almost like a Cartesian product).
I search a round this forum but still did not see a conclusive answer to this problem.
But I see another thread where several others faced this same issue without a resolution.
My environment.
BOE XI 3.1 SP2
No fix packs
SAP IK 3.1 SP2
HPUX-IA64
Thanks in advance for any help.
Dwayne
Was this problem ever solved?
I am having a similar problem with an infoset based query.
I have created the BW infoset, and confirmed that the correct data is returned from the underlying infoproviders. A simple BW query on that infoset yields the same results.
Create the universe, and then the WEBI, and WEBI now shows hundreds of records where I expect 10. Data is being returned in WEBI that definitely shouldn't be there. It's almost like the restrictions applied in the characteristic restriction area of my BW query are being ignored, even if I include them in the WEBI query.
Cheers,
Andrew -
BTREE and duplicate data items: over 300 people read this, nobody answers?
I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with a 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
I wonder if in my case it would be more efficient to have a b-tree whose key is the combined (4 byte integer, 8 byte integer) and whose data is a zero-length or 1-length dummy item (in case zero-length is not an option).
I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
while (i < hcp->dup_tlen) {
    memcpy(&len, data, sizeof(db_indx_t));
    data += sizeof(db_indx_t);
    DB_SET_DBT(cur, data, len);
    /*
     * If we find an exact match, we're done. If in a sorted
     * duplicate set and the item is larger than our test item,
     * we're done. In the latter case, if permitting partial
     * matches, it's not a failure.
     */
    *cmpp = func(dbp, dbt, &cur);
    if (*cmpp == 0)
        break;
    if (*cmpp < 0 && dbp->dup_compare != NULL) {
        if (flags == DB_GET_BOTH_RANGE)
            *cmpp = 0;
        break;
    }
What's the expert opinion on this subject?
Vincent
Message was edited by:
user552628
Hi,
The special thing about it is that with a given key,
there can be a LOT of associated data, thousands to
tens of thousands. To illustrate, a btree with a 8192
byte page size has 3 levels, 0 overflow pages and
35208 duplicate pages!
In other words, my keys have a large "fan-out". Note
that I wrote "can", since some keys only have a few
dozen or so associated data items.
So I configure the b-tree for DB_DUPSORT. The default
lexical ordering with set_dup_compare is OK, so I
don't touch that. I'm getting the data items sorted
as a bonus, but I don't need that in my application.
However, I'm seeing very poor "put (DB_NODUPDATA)
performance", due to a lot of disk read operations.
In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity (which implies that the search time depends on the number of keys stored in the underlying tree). When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate. Thus, given that for each key (in most cases) there is a large number of associated data items (up to thousands, or tens of thousands), an impressive number of pages has to be brought into the cache to check against the duplicate criteria.
Of course, the problem of sizing the cache and the database's pages arises here. Your size settings for these measures should tend toward large values; this way the cache will be able to accommodate the large pages (in which hundreds of records should be hosted).
Setting the cache and the page size to their ideal values is a process of experimenting.
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
While there may be a lot of reasons for this anomaly,
I suspect BDB spends a lot of time tracking down
duplicate data items.
I wonder if in my case it would be more efficient to
have a b-tree with as key the combined (4 byte
integer, 8 byte integer) and a zero-length or
1-length dummy data (in case zero-length is not an
option).
Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback.
You can have records with a zero-length data portion.
Also, could you provide more information on whether or not you're using an environment and, if so, how you configured it, etc.? Have you thought of using multiple threads to load the data?
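The composite-key workaround discussed here (position with DB_SET_RANGE, then step with DB_NEXT while the prefix still matches) can be sketched in plain Python. Sorted tuples stand in for the Btree and bisect plays the role of the range positioning; this is an illustration of the idea, not the Berkeley DB API:

```python
import bisect

# (key, value) pairs kept in sorted order, like a Btree with a
# composite (4-byte integer, 8-byte integer) key and empty data.
entries = sorted([(1, 10), (1, 42), (2, 7), (2, 99), (3, 5)])

def iter_prefix(entries, key):
    # DB_SET_RANGE analogue: position at the first entry whose
    # composite key is >= (key, <smallest value>).
    i = bisect.bisect_left(entries, (key,))
    # DB_NEXT analogue: keep stepping while the prefix matches.
    while i < len(entries) and entries[i][0] == key:
        yield entries[i]
        i += 1

print(list(iter_prefix(entries, 2)))  # [(2, 7), (2, 99)]
```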
Another possibility would be to just add all the
data integers as a single big giant data blob item
associated with a single (unique) key. But maybe this
is just doing what BDB does... and would probably
exchange "duplicate pages" for "overflow pages"
This is a terrible approach, since bringing an overflow page into the cache is more time consuming than bringing in a regular page, and thus a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
Or, the slowdown is a BTREE thing and I could use a
hash table instead. In fact, what I don't know is how
duplicate pages influence insertion speed. But the
BDB source code indicates that in contrast to BTREE
the duplicate search in a hash table is LINEAR (!!!)
which is a no-no (from hash_dup.c):
The Hash access method has, as you observed, a linear search within a duplicate set (and thus a search/lookup time proportional to the number of items in the bucket). Combined with the fact that you don't want duplicate data, the Hash access method may not improve performance.
This is a performance/tuning problem, and it involves a lot of resources on our part to investigate. If you have a support contract with Oracle, please don't hesitate to put your issue up on Metalink, or indicate that you want this issue to be handled in private and we will create an SR for you.
Regards,
Andrei