Database sizing, and also storing recovery keys in AD?

Hi All,
I'm spec'ing out an MBAM 2.0 design for our Windows 8.1 deployments and wanted to check a few things.
1. Are there any issues with storing the recovery key in AD (as we do currently with Windows 7) as well as in the MBAM database? Other threads seem to suggest that it's fine.
2. Database sizing. I've seen figures of approximately 250 MB per 10,000 clients. Does that seem fair, and is that for each database (i.e. the Recovery and Audit databases, therefore 500 MB per 10,000 clients)?
3. Our SQL team suggested putting SSRS onto the Admin & Monitoring server. Is that a supported configuration, or a big no-no for security reasons? Or does MBAM use the SSRS instance of the CM server?
It might also be worth noting that this will be integrated with CM12. We have a CAS with a number of primary sites, so I was presuming the integration would be installed on the CAS.
Thanks in advance.
Simon

1.) Not that I have seen. I have done this in the past without issue.
2.) From what I have encountered, that seems like a fair estimate. I believe that a more or less frequent client check-in to the server will give you a slightly bigger or smaller DB size.
3.) I'm not sure whether it's supported. Is there a reason you particularly want SSRS? You could certainly use SSRS to generate your own reports, but the console lets you generate some canned reports, and I don't believe that uses SSRS to do so.
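For rough capacity planning, a back-of-the-envelope calculation along these lines may help. It assumes the ~250 MB per 10,000 clients figure quoted above applies to each of the two databases, which is exactly the point being asked about, so treat it as an illustration rather than guidance:

public class MbamSizingEstimate {
    public static void main(String[] args) {
        int clients = 10_000;                 // number of BitLocker clients
        double mbPerTenThousandPerDb = 250.0; // figure quoted in this thread
        int databases = 2;                    // Recovery database + Audit database

        double perDbMb = clients / 10_000.0 * mbPerTenThousandPerDb;
        double totalMb = perDbMb * databases;

        // For 10,000 clients this prints: Per database: 250 MB, total: 500 MB
        System.out.printf("Per database: %.0f MB, total: %.0f MB%n", perDbMb, totalMb);
    }
}

Scaling the client count (and adding headroom for growth and index overhead) gives a first-cut figure to hand to the SQL team.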
Please mark any answers to help others. Blog: rorymon.com | Twitter: @Rorymon

Similar Messages

  • Csv file to database tables and also foreginkey related columns directly

    I have created dimension tables in SSIS and I need to load data into those tables from the CSV files I was given. I also have foreign key columns in the fact table that need to be loaded.

    We definitely have primary key relationships: the tables contain primary keys and foreign keys. I have created nearly 20 tables in SQL Server, some of them dimensions and some facts, and I have CSV files of data that I need to load into those tables using an SSIS package.
    My idea was to take one Data Flow Task in the Control Flow for each and every table, with one OLE DB destination per table and a CSV file as the source for each; by connecting the two we can load the data.
    But I need to load data into 20 tables with a single Data Flow Task. How is that possible? Is there any solution, or are there different ways to load data from CSV files into the tables with SSIS?
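    SSIS itself is configured in the designer rather than in code, but the fan-out idea (one loop driving many file-to-table loads instead of twenty hand-built data flows) can be sketched outside SSIS with plain JDBC. The file names, table names and connection string below are hypothetical, and a real SSIS solution would typically use a Foreach Loop Container instead:

    import java.nio.file.*;
    import java.sql.*;
    import java.util.*;

    // Illustration only: loop over (CSV file -> table) pairs and batch-insert each file.
    public class CsvFanOutLoader {
        public static void main(String[] args) throws Exception {
            Map<String, String> fileToTable = Map.of(
                    "dim_customer.csv", "DimCustomer",
                    "dim_product.csv", "DimProduct",
                    "fact_sales.csv", "FactSales");
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=Staging", "user", "password")) {
                for (Map.Entry<String, String> e : fileToTable.entrySet()) {
                    loadCsv(con, Paths.get(e.getKey()), e.getValue());
                }
            }
        }

        // Assumes the first CSV line is a header naming the target columns.
        static void loadCsv(Connection con, Path csv, String table) throws Exception {
            List<String> lines = Files.readAllLines(csv);
            String[] header = lines.get(0).split(",");
            String placeholders = String.join(",", Collections.nCopies(header.length, "?"));
            String sql = "INSERT INTO " + table + " (" + String.join(",", header)
                    + ") VALUES (" + placeholders + ")";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (String line : lines.subList(1, lines.size())) {
                    String[] values = line.split(",");
                    for (int i = 0; i < values.length; i++) ps.setString(i + 1, values[i]);
                    ps.addBatch();
                }
                ps.executeBatch(); // one batch per table keeps the loop simple
            }
        }
    }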

  • Lifecycle Workflow platform and database sizing guide?

    I've been asked to investigate the technical aspects of the Lifecycle Workflow and Reader Extension products.
    I've gone through the various Adobe websites a great deal and found installation guides, but nothing that provides platform selection and database sizing recommendations.
    We are considering both JBoss on SUSE Linux and WebSphere on AIX as a platform.
    Is anyone aware of any recommendations as to the choice of a platform?
    Any experiences or recommendations out there from the user community?
    Also, is there any documentation that gives guidance to database sizing? Frankly, I'm not sure I understand yet whether it's just the routing/metadata that gets stored in the workflow database or the actual documents as well.
    Can anyone point me at, or provide, more technical documentation?
    Thanks
    Verlyn

    Hi Verlyn
    I can answer some of your questions, based on my understanding.
    The workflow engine does store the document itself as part of the workflow. In fact, it records the historical state of the document at each point that someone in the workflow worked on it. You can see this by looking at the "Participated" item in Form Manager, where you can see the historical value of each form that's been submitted. Any workflow attachments are also stored in the database.
    It gets a little more complicated. Depending on how you structure your workflow, you can choose to either store the entire PDF (in a document or binary variable) or just the data (in a form variable, in which case the data is merged with an XDP template whenever someone wants to view it).
    Document variables are also a little more intelligent: you can specify a size threshold. Below that threshold, the document is stored as a blob in the database - above the threshold, the document is stored in a folder on the file system. The default threshold value is 64K.
    I hope this helps...
    Howard Treisman
    http://www.avoka.com

  • PLEASE help!! I use Outlook for my email and the Mail App Icon in my dock was also storing all my emails. So, I tried to delete ONLY the emails in the Mail App but somehow (under Preferences maybe?) I also deleted all my emails in my Outlook inbox.

    PLEASE help!! I use Outlook for my email and the Mail App Icon that is in my dock, that I know nothing about, was also storing all my emails. So, I tried to delete ONLY the emails in the Mail App but somehow (under Preferences maybe?) I also deleted ALL my OUTLOOK emails in my inbox.  Can someone please tell me if there is a way to get my Outlook inbox emails back OR EVEN the emails that were in my Mail app - because even though I never use the Mail app, at least the emails would be there.
    When I was trying to delete the emails in my inbox for the Mail app - I followed THESE directions.  It did in fact clear my inbox for the Mail app.  But then I went to log on to my Outlook account and EVERY SINGLE EMAIL was gone.  And not in the Deleted box.  Just gone.  Here are the directions I followed that screwed everything up.  Please help.
    Top menu bar, Mail > Preferences > Accounts > Mailbox Behaviors.
    Uncheck "Store deleted messages on the server".
    At the drop list for "Permanently erase deleted messages when", choose "Quitting Mail".
    Next...
    Top menu bar, Mail > Preferences > General.
    At "When searching all mailboxes, include results from", uncheck "Trash".
    Select All = command A

    I found out my problem!
    Here is what you do:
    Go to "System Preferences" in your dock.
    Click "Software Update".
    Click "Installed Software".
    If it shows something about a recent update such as "EFI UPDATE, FIRMWARE, THUNDERBOLT" or anything like that, exit out of it.
    Go to Mail.
    Click "Mail" at the top.
    Click "Preferences...".
    Find the account you are having trouble with. Once you do, make sure it's highlighted, then click the "-" at the bottom of the window (this will only affect that mail account; it will not affect your iCal, whether or not it's synced through that email account).
    Hit the "+" (right next to the "-") and add your account back!
    It's something in that update that affected Mail. I hope this works out for you; if not, reply back.

  • I lost my iPhone and I had stored my photos on iCloud. How do I get that data back? My PC is also not working.

    I lost my iPhone and I had stored my photos on iCloud. How do I get that data back? My PC is also not working.

    Welcome to the Apple Community Rahul.
    Photos in photo stream...
    You can log into your iCloud account on another computer or device and enable photo stream to see your photos. You should do this as soon as possible, because photo stream in the cloud only keeps photos for 30 days, and each day you delay you will lose another day's photos.
    Photos in the camera roll...
    If you kept a backup of your device you can use it to restore a replacement device; this will restore all of the photos in your camera roll and isn't time limited.

  • CRM TPM Database Sizing for CRM and BW

    All,
    I am currently sizing for a TPM implementation and have a couple of questions concerning storage capacity for CRM and BW.  I have reviewed and created an Excel spreadsheet based on the SAP Sizing Guide for CRM-TPM but I am coming up short in a couple areas.
    Here is the document Link: [https://websmp105.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000711312004E]
    1.  Is there a storage sizing guide for BW or what has worked for the community to estimate?
    2.  Is the sizing guide for CRM/TPM correct (see below example)? 
    3.  What has worked for CRM/TPM database sizing from the community?
    I have a question about section 3.3.3 Disk Sizing in CRM.  If the disk sizing is based on per-promotion figures (for the condition generation process), why is there a multiplication factor for PARTNERS?  I don't believe we would have more than 1 or 2 partners per promotion.
    I did some quick math with some example numbers and came up with about 2.9TB for the CRM database.  See below for additional info based on the equation in section 3.3.3.
    Part 1: 20,000 promotions, 10 products, 1,000 partners -> 0.87 TB
    Part 2: 10,000 promotions, 47 products, 1,000 partners -> 2.04 TB
    Is this accurate for sizing the condition generation process for the CRM database?  I am failing to understand why, for example, the 20,000 promotions would have 1,200 partners included in the base equation for each promotion.
    I appreciate any time you could spend in responding to my question.
    Thanks in advance,
    Steve

    Thanks, Steve, for your reply.
    I am looking for a sizing sheet from an SAP TPM perspective. Could you share your Excel spreadsheet based on the SAP Sizing Guide for CRM-TPM?
    regards
    AK

  • APEX database sizing methods and spreadsheets

    APEX database sizing methods and spreadsheets

    Yes, I am asking how much space the APEX 3.2 framework requires, as well as how much space to allocate for your particular application that happens to be implemented in APEX. I have one Word form that contains 10 fields which are filled in by users currently. So far I have 50 of these completed (same form) and would like to create an APEX application, supported by a database, that can initially hold this data in one table once migrated and can hold more of it as the new online system is used. Therefore, are there any sizing methods, for example function points or Excel macros, that can be used to predict the database size needed based on an increase in data volume?
    I ask this because currently APEX 3.2 uses Oracle Database Express Edition (XE). Oracle Database XE can address only 1 GB of RAM. This limitation mainly affects how many users can access the database concurrently and how well it performs, but APEX can run against a full 10g or 11g database install as well as XE, and you can upgrade rather nicely from XE to the full database if your needs demand it.
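    There is no single formula, but for a small form-capture table like the one described, a simple volumetric estimate (row count times average row size over a growth horizon) is usually enough for a first cut. The byte figures and intake rate below are illustrative assumptions, not APEX or Oracle requirements:

    // Back-of-the-envelope volumetric sizing for a single form-capture table.
    // All byte figures and the monthly intake are assumptions for illustration.
    public class TableSizeEstimate {
        public static void main(String[] args) {
            int fieldsPerRow = 10;          // fields on the form
            int avgBytesPerField = 100;     // assumed average stored size per field
            int rowOverheadBytes = 50;      // assumed per-row overhead (row header, index entries)

            int existingRows = 50;          // forms already completed
            int newRowsPerMonth = 200;      // assumed intake once the system is online
            int horizonMonths = 36;

            long bytesPerRow = (long) fieldsPerRow * avgBytesPerField + rowOverheadBytes;
            long totalRows = existingRows + (long) newRowsPerMonth * horizonMonths;
            double totalMb = totalRows * bytesPerRow / (1024.0 * 1024.0);

            System.out.printf("~%d rows at ~%d bytes/row -> roughly %.1f MB of table data%n",
                    totalRows, bytesPerRow, totalMb);
        }
    }

    At these volumes the application data will likely be small compared with what the APEX framework itself occupies, which is why the XE limits matter more than the table.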

  • Problem using secondary database, sequence (and custom tuple binding)

    I get an exception when I try to open a Sequence to a database that has a custom tuple binding and a secondary database. I have a guess what the issue is (below), but it boils down to my custom tuple-binding being invoked when opening the sequence. Here is the exception:
    java.lang.IndexOutOfBoundsException
        at com.sleepycat.bind.tuple.TupleInput.readUnsignedInt(TupleInput.java:414)
        at com.sleepycat.bind.tuple.TupleInput.readInt(TupleInput.java:233)
        at COM.shopsidekick.db.community.Shop_URLTupleBinding.entryToObject(Shop_URLTupleBinding.java:72)
        at com.sleepycat.bind.tuple.TupleBinding.entryToObject(TupleBinding.java:73)
        at COM.tagster.db.community.SecondaryURLKeyCreator.createSecondaryKey(SecondaryURLKeyCreator.java:38)
        at com.sleepycat.je.SecondaryDatabase.updateSecondary(SecondaryDatabase.java:546)
        at com.sleepycat.je.SecondaryTrigger.databaseUpdated(SecondaryTrigger.java:42)
        at com.sleepycat.je.Database.notifyTriggers(Database.java:1343)
        at com.sleepycat.je.Cursor.putInternal(Cursor.java:770)
        at com.sleepycat.je.Cursor.putNoOverwrite(Cursor.java:352)
        at com.sleepycat.je.Sequence.<init>(Sequence.java:139)
        at com.sleepycat.je.Database.openSequence(Database.java:332)
    Here is my code:
    // URL ID DB
    DatabaseConfig urlDBConfig = new DatabaseConfig();
    urlDBConfig.setAllowCreate(true);
    urlDBConfig.setReadOnly(false);
    urlDBConfig.setTransactional(true);
    urlDBConfig.setSortedDuplicates(false); // No sorted duplicates (can't have them with a secondary DB)
    mURLDatabase = mDBEnv.openDatabase(txn, "URLDatabase", urlDBConfig);
    // Reverse URL lookup DB table
    SecondaryConfig secondaryURLDBConfig = new SecondaryConfig();
    secondaryURLDBConfig.setAllowCreate(true);
    secondaryURLDBConfig.setReadOnly(false);
    secondaryURLDBConfig.setTransactional(true);
    TupleBinding urlTupleBinding = DataHelper.instance().createURLTupleBinding();
    SecondaryURLKeyCreator secondaryURLKeyCreator = new SecondaryURLKeyCreator(urlTupleBinding);
    secondaryURLDBConfig.setKeyCreator(secondaryURLKeyCreator);
    mReverseLookpupURLDatabase = mDBEnv.openSecondaryDatabase(txn, "SecondaryURLDatabase", mURLDatabase, secondaryURLDBConfig);
    // Open the URL ID sequence
    SequenceConfig urlIDSequenceConfig = new SequenceConfig();
    urlIDSequenceConfig.setAllowCreate(true);
    urlIDSequenceConfig.setInitialValue(1);
    mURLSequence = mURLDatabase.openSequence(txn, new DatabaseEntry(URLID_SEQUENCE_NAME.getBytes("UTF-8")), urlIDSequenceConfig);
    My secondary key creator class looks like this:
    public class SecondaryURLKeyCreator implements SecondaryKeyCreator {
        // Member variables
        private TupleBinding mTupleBinding; // The tuple binding

        // Constructor.
        public SecondaryURLKeyCreator(TupleBinding iTupleBinding) {
            mTupleBinding = iTupleBinding;
        }

        // Create the secondary key.
        public boolean createSecondaryKey(SecondaryDatabase iSecDB, DatabaseEntry iKeyEntry, DatabaseEntry iDataEntry, DatabaseEntry oResultEntry) {
            try {
                URLData urlData = (URLData) mTupleBinding.entryToObject(iDataEntry);
                String URL = urlData.getURL();
                oResultEntry.setData(URL.getBytes("UTF-8"));
            } catch (IOException willNeverOccur) {
            }
            // Success
            return true;
        }
    }
    I think I understand what is going on, and I only noticed it now because I added more fields to my custom data (and tuple binding):
    com.sleepycat.je.Sequence.java line 139 (version 3.2.44) does this:
    status = cursor.putNoOverwrite(key, makeData());
    makeData creates a byte array of size MAX_DATA_SIZE (50 bytes) -- which has nothing to do with my custom data.
    The trigger causes a call to SecondaryDatabase.updateSecondary(...) on the secondary DB.
    updateSecondary calls createSecondaryKey in my SecondaryKeyCreator, which calls entryToObject() in my tuple binding, which calls TupleInput.readString(), etc. to match my custom data. Since what is being read goes beyond the 50-byte array, I get the exception.
    I didn't notice this before because my custom tuple binding used to read fewer than 50 bytes.
    I think the problem is that my tuple binding is being invoked at all at this point -- opening a sequence -- since there is no data on which it can act.

    Hi,
    It looks like you're making a common mistake with sequences, which is to store the sequence itself in a database that is also used for application data. The sequence should normally be stored in a separate database to prevent configuration conflicts and actual data conflicts between the sequence record and the application records.
    I suggest that you create another database whose only purpose is to hold the sequence record. This database will contain only a single record -- the sequence. If you have more than one sequence, storing all sequences in the same database makes sense and is safe.
    The database used for storing sequences should not normally have any associated secondary databases and should not be configured for duplicates.
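    A minimal sketch of that suggestion, using only API calls that already appear in the code above (the database and variable names here are just examples):

    // Open a small database whose only purpose is to hold sequence records.
    // It has no associated secondary databases and duplicates stay disabled.
    DatabaseConfig seqDbConfig = new DatabaseConfig();
    seqDbConfig.setAllowCreate(true);
    seqDbConfig.setTransactional(true);
    Database sequenceDb = mDBEnv.openDatabase(txn, "SequenceDatabase", seqDbConfig);

    // The URL ID sequence now lives in its own database, so opening it never
    // fires the secondary key creator that is attached to URLDatabase.
    SequenceConfig urlIDSequenceConfig = new SequenceConfig();
    urlIDSequenceConfig.setAllowCreate(true);
    urlIDSequenceConfig.setInitialValue(1);
    mURLSequence = sequenceDb.openSequence(txn,
            new DatabaseEntry(URLID_SEQUENCE_NAME.getBytes("UTF-8")), urlIDSequenceConfig);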
    --mark

  • So, iDisk (great with MobileMe) is closed! Ok. I got the most expensive iCloud Plan and LOVED storing files online. Is that REALLY OVER??!! (besides Numbers', Pages', and Keynote's docs sync).. Will I REALLY have to downgrade to "free" on iCloud?!!

    So, iDisk (great with MobileMe) is closed! Ok. I got the most expensive iCloud Plan and LOVED storing files online. Is that REALLY OVER??!! (besides Numbers', Pages', and Keynote's docs sync).. Will I REALLY have to downgrade to "free" on iCloud and PAY for "DropBox", "Google Drive" or "Microsoft's Sky Whatever.."?!!
    I could not log in to this Support Community with my UPDATED Apple ID (and this is another issue I can't believe Apple just says "NO, you CAN'T!" - deleting other (old) Apple IDs we've created by mistake or a long time ago and being able to have it all on ONE SINGLE UPDATED Apple ID - ALSO saving Apple's storage servers, etc.).
    Anyway, my Apple ID for better communication is [email protected] (but I really couldn't log in here - it kept saying that the e-mail is already registered, etc. I know it, I'm trying to use it here, that's all...
    So, really. Of course a lot of people don't care about the iDisk issue. That's fine, we all love and hate specific apps or software.. But why has Apple just shut it down, since it was so EASY to use (as DropBox) on all our Apple devices?!! So, we really can't upload/store any kind of files anymore and that's it?!
    I used iDisk when MobileMe was still alive, and it was great to have my files from work, home, any backup (whatever we want) uploaded and synced and used on our Macs, iPhones and iPads. If we have the OPTION of an iCloud Plan (the top one, for example), but we just don't like the automatic backups for the iPhones and iPads, nor the Pictures' sync function.. Why can't we just have the SAME simple and efficient/great service of the iDisk installed on each device?!
    Do we really have to go back and downgrade back to the free iCloud Option (just for keeping the e-mail) and paying extra for DropBox, Google, Microsoft, any other services?! I have nothing against them - I actually have DropBox and it works great because it REMINDS me of iDisk!! But WHY NOT just keep it simple and closed with 1 company/platform/option - Apple's iOS, OSX, etc.
    That's why I decided to have a Macbook Pro, an iPhone 4S and an iPad 3rd. Generation (for example, along with other products and services such as Apple TV).. I know I'm CHOOSING to stay with 1 world, that's fine. And that's why I didn't (yet) go for a Samsung phone or another tablet.. I wanna stick with 1 pattern and I have chosen Apple's..
    Just can't understand such an innovative company make such decisions (such as shutting down iDisk or not letting us just DELETE Apple IDs we do NOT use anymore).. Please, just some guidance or confirmation if that's correct: No way of uploading files others than Numbers', Pages' or Keynote's!!??
    Thank you very much!!
    Peace.

    Thank you Roger.
    That's exactly what I was afraid of...
    During all the time I have been a loyal customer of Apple's products and services, these have been the only 2 issues that I really can't understand (especially when we look at Apple's strategies and management)...
    1) Not being able to DELETE (simply delete or deactivate) old or duplicated Apple IDs... I have 3 different Apple IDs but only one is the real Apple ID I use for everything, with all my purchases and services (such as e-mail) related to it (Why won't Apple just let people delete, deactivate or - at least - merge Apple IDs?! That would help everyone, including Apple when we think about costs, storage, database maintenance, etc.) - Actually, even when I gave up on trying to delete other Apple IDs, I tried to have the one I really use and have organized to use my @me.com address as the Apple ID itself (as Apple asks us to do - and makes sense, of course). But it never worked. Just because I created that Apple ID with a @hotmail.com account (since it asks for a valid e-mail when you create an Apple ID), it NEVER lets me now change my Apple ID to my @me.com e-mail address - I mean JUST having my @me.com e-mail address as my primary e-mail and my Apple ID (for example, having that old @hotmail.com e-mail address out of it, cleaned..)!
    2) The other issue was this one, about iDisk. Of course there must be reasons for having it shut down, maybe the apps used by iPhone and iPad, along with the folder on Mac OSX, are way too complicated, expensive, full of bugs, I don't know. But if we decide to pay for 50GB of online storage, why even THINK about different options for files we use?! I have Dropbox, I've read about Google Drive and Microsoft SkyDrive, etc. But I just wanted to STICK with a single ALL-APPLE solution.. That is.. Pay the annual fee for the 50GB and just still keep any file I want on the cloud with iDisk. So, what Apple tells me is that if I have files other than iWork or PDF (and a few exceptions), I should actually downgrade to "free" iCloud and decide to PAY for another solution (organization), such as DropBox, Google Disk or Drive (I don't really know), or Microsoft's SkyDrive..
    Of course there's the strategy of getting more and more people to use Pages, Numbers and Keynote (buying them for iOS and Mac OSX, as I already have and love them). But "killing" iDisk and just telling users to look for other storage solutions doesn't really sound like Apple.
    I have seen so many questions here about both issues (deleting or merging Apple IDS + iDisk back since we pay for storage) that I really believe Apple should reconsider these issues or, at least, open the issue and let us know what would be the best "partner solution" for iCloud...
    Thanks a lot!
    All the best...
    Eduardo Rocha.
    [email protected]

  • Logical Database design and physical database implementation

    Hi
    I am an Oracle DBA, basically, and we started a proactive server dashboard portal which reports all aspects of our infrastructure (Dev, QA and Prod: performance, capacity, number of servers, number of CPUs, decommissioned date, OS level, database patch level, etc.).
    This has to be done entirely by our DBA team, as this is not an externally funded project. Now I was asked to do "Logical Database design and physical Database implementation".
    Even though I know roughly what that means (like designing a whole set of tables in star schema format), I have never done this before.
    In my mind I have a rough set of tables that could be used, but again I think there is a lot of engineering involved in this area to make sure that we do it properly.
    I am wondering whether you might have some recommendations on where to start. Are there any documents online, or any books on this topic? Are there any documents which explain this with examples?
    Also, what exactly is the difference between logical database design and physical database implementation?
    Thanks and Regards

    Logical database design is the process of taking a business or conceptual data model (often described in the form of an Entity-Relationship Diagram) and transforming that into a logical representation of that model using the specific semantics of the database management system. In the case of an RDBMS such as Oracle, this representation would be in the form of definitions of relational tables, primary, unique and foreign key constraints and the appropriate column data types supported by the RDBMS.
    Physical database implementation is the process of taking the logical database design and translating that into the actual DDL statements supported by the target RDBMS that will create the database objects in a target RDBMS database. This will generally include specific physical implementation details such as the specification of tablespaces, use of specialised indexing (bitmap, clustered etc), partitioning, compression and anything else that relates to how data will actually be physically stored inside the database.
    It sounds like you already have a physical implementation? If so, you can reverse engineer this implementation into a design tool such as SQL Developer Data Modeller. This will create a logical design by examining the contents of the Oracle data dictionary. Even if you don't have an existing database, Data Modeller is a good tool to use as a starting point for logical and even conceptual/business models.
    If you want to read anything about logical design, "An Introduction to Database Systems" by Date is always a good starting point. "Database Systems - A Practical Approach to Design, Implementation and Management" by Connolly & Begg is also an excellent reference.

  • EAV modelling approach on database tables and how to use it in Forms Dvl

    Hello,
    I am dealing with two tables in the data model which are structured using the EAV (Entity-Attribute-Value) modelling approach.
    Survey( Survey# number not null,
    Start_Date_Of_Survey date not null ,
    End_Date_Of_Survey date null,
    Sts_Of_Survey varchar2( 4 ) not null ,
    Contractor_Partner# number not null, -- reference foreign key, to Partner's Table
    Survey_Report CLOB ) ;
    Survey# primary key of Survey.
    Meta_Survey_Parameter_Register( Parameter# varchar2( 40 ) not null,
    Additional_Parameter_Desc varchar2( 240 ) null,
    Type_Of_Value varchar2( 60 ) not null,
    Data_Type varchar2( 10 ) not null,
    Unit varchar2( 10 ) null,
    High_Value varchar2( 20 ) null,
    Low_Value varchar2( 20 ) null );
    Type_Of_Value is restricted to: ATOMIC_VALUE, INTERVAL_VALUE_FROM_TO.
    Data_Type is restricted to: date, time, number( X,Y ), varchar2.
    Unit is a varchar2 value and is not restricted: for example %, kg as kilo, m as meter, m/s as meter per second, or null.
    This table is a meta table, so it is supposed to be used as a register - the static part of the schema.
    Then I have a table which attaches Parameters to the Survey as an association table:
    Survey_Parameters( Survey#, Parameter#, Value, Comment );
    Primary key( Survey#, Parameter# );
    I need to implement this in Forms and Reports.
    My question is how to implement this so as to make the best use of Oracle Forms 10g or 11g and Oracle Reports 10g features:
    a) when filling in the survey, using the parsing rules supplied in Meta_Survey_Parameter_Register;
    b) when producing the survey report with Reports;
    c) in data analysis in data warehouse implementations.

    Hello Craig,
    Thank you Craig for your answer.
    I must confess that I am also not familiar with the theoretical background and concept of EAV. Until recently I did not even know that this kind of problem could be placed in a theoretical framework called EAV. That put a smile on my face and relieved my mind, because since that moment I no longer feel that this kind of problem is some exception, or that exceptional tools or data designs (object modelling and object databases) are supposed to be used. This way I can still count on the concepts of RDBMS databases and applications. Well, I guess so.
    Let me comment and answer within your text, between your lines.
    I am only conceptually familiar with EAV modeling, so I don't claim to be an expert! ;-) Given the abstract way in which data is stored with EAV, I would suggest that you first create a SQL Query that will produce the desired record set.
    OK. But I guess you have in mind transforming the vertical set of values into a horizontal one (pivoting, transposing)?
    The view has a cast operator that transforms the data type of each parameter from the general one (e.g. varchar2) to the one registered in
    Meta_Parameter_Register. This way the survey can be retrieved with its list of assigned parameters in horizontal shape?
    Once you have this, you can either use the Query as the source of a Database View and then base your Forms data block on this view or you can use the query in a Procedure and base your Forms data block on the procedure or lastely, you could use the query directly and base your Forms data block on the query using the "From Clause" option.
    Ok, I think I understand it.
    So when I have the horizontal list, the challenge is in the
    - insert
    - delete
    - update
    operations: how to handle these operations and transform them back into vertical ones.
    Could this only be done with a Forms data block based on a procedure and INSTEAD OF triggers?
    The simplest option would be to create a DB view and base your block on the view. You will not be able to update the data however. If you need to be able to change or add new data, I would suggest you use a Procedure based block. Take a look at these links for additional information on using Procedure and From Clause based blocks.
    OK Craig, that sounds like it makes sense.
    It seems your suggestion calls for a simpler example with the same characteristics as the problem.
    Like two tables:
    Master( Master#, ..... )
    Detail( Master#, Detail#, value_low_or_just_value, value_high, data_type )
    Forms: How to base a block on a FROM clause Query
    Forms: How to base a data block on a Procedure
    Ok,
    Hope this helps,
    I see your point and it makes sense; it is only a matter of seeing how much flexibility I can reach, and how cheap the workarounds are, to preserve
    that level of flexibility in tools that do not naturally support EAV (like Forms and Reports).
    If someone's response is helpful or correct, please mark it accordingly.
    Sure...
    We are on the line.
    I am sure there is someone else in the community who has experienced this kind of challenge.

  • New User Database schema and table name

    When I create a new user in Oracle WebCenter Spaces 11g, I am not able to find the name of the database schema and table where it is stored. Any insight on this would be very helpful.

    WebCenter (and WebCenter spaces) uses an 'identity store' instead of database schema for storing user information - in an 'out of the box' installation, users are maintained through an embedded WebLogic LDAP store.
    See the Oracle Fusion Middleware Administrator's Guide for Oracle WebCenter, p. 34-2.
    At the end of section 34.4.1:
    WebCenter Spaces supports self-registration. When new WebCenter users
    self-register, they create their own login and password and a new user account is
    created in the identity store. See also, Section 34.4, "Allowing Self-Registration".

  • Secondary database performance and CacheMode

    This is somewhat a follow-on thread related to: Lock/isolation with secondary databases
    In the same environment, I'm noticing fairly low-performance numbers on my queries, which are essentially a series of key range-scans on my secondary index.
    Example output:
    08:07:37.803 BDB - Retrieved 177 entries out of index (177 ranges, 177 iters, 1.000 iters/range) in: 87ms
    08:07:38.835 BDB - Retrieved 855 entries out of index (885 ranges, 857 iters, 0.968 iters/range) in: 346ms
    08:07:40.838 BDB - Retrieved 281 entries out of index (283 ranges, 282 iters, 0.996 iters/range) in: 101ms
    08:07:41.944 BDB - Retrieved 418 entries out of index (439 ranges, 419 iters, 0.954 iters/range) in: 160ms
    08:07:44.285 BDB - Retrieved 2807 entries out of index (2939 ranges, 2816 iters, 0.958 iters/range) in: 1033ms
    08:07:50.422 BDB - Retrieved 253 entries out of index (266 ranges, 262 iters, 0.985 iters/range) in: 117ms
    08:07:52.095 BDB - Retrieved 2838 entries out of index (3021 ranges, 2852 iters, 0.944 iters/range) in: 835ms
    08:07:58.253 BDB - Retrieved 598 entries out of index (644 ranges, 598 iters, 0.929 iters/range) in: 193ms
    08:07:59.912 BDB - Retrieved 143 entries out of index (156 ranges, 145 iters, 0.929 iters/range) in: 32ms
    08:08:00.788 BDB - Retrieved 913 entries out of index (954 ranges, 919 iters, 0.963 iters/range) in: 326ms
    08:08:03.087 BDB - Retrieved 325 entries out of index (332 ranges, 326 iters, 0.982 iters/range) in: 103ms
    To explain those numbers, a "range" corresponds to a sortedMap.subMap() call (ie: a range scan between a start/end key) and iters is the number of iterations over the subMap results to find the entry we were after (implementation detail).
    In most cases, the iters/range is close to 1, which means that only 1 key is traversed per subMap() call - so, in essence, 500 entries means 500 ostensibly random range-scans, taking only the first item out of each rangescan.
    However, it seems kind of slow - 2816 entries is taking 1033ms, which means we're really seeing a key/query rate of ~2700 keys/sec.
    Here's performance profile output of this process happening (via jvisualvm): https://img.skitch.com/20120718-rbrbgu13b5x5atxegfdes8wwdx.jpg
    Here's stats output after it running for a few minutes:
    I/O: Log file opens, fsyncs, reads, writes, cache misses.
    bufferBytes=3,145,728
    endOfLog=0x143b/0xd5b1a4
    nBytesReadFromWriteQueue=0
    nBytesWrittenFromWriteQueue=0
    nCacheMiss=1,954,580
    nFSyncRequests=11
    nFSyncTime=12,055
    nFSyncTimeouts=0
    nFSyncs=11
    nFileOpens=602,386
    nLogBuffers=3
    nLogFSyncs=96
    nNotResident=1,954,650
    nOpenFiles=100
    nRandomReadBytes=6,946,009,825
    nRandomReads=2,577,442
    nRandomWriteBytes=1,846,577,783
    nRandomWrites=1,961
    nReadsFromWriteQueue=0
    nRepeatFaultReads=317,585
    nSequentialReadBytes=2,361,120,318
    nSequentialReads=653,138
    nSequentialWriteBytes=262,075,923
    nSequentialWrites=257
    nTempBufferWrites=0
    nWriteQueueOverflow=0
    nWriteQueueOverflowFailures=0
    nWritesFromWriteQueue=0
    Cache: Current size, allocations, and eviction activity.
    adminBytes=248,252
    avgBatchCACHEMODE=0
    avgBatchCRITICAL=0
    avgBatchDAEMON=0
    avgBatchEVICTORTHREAD=0
    avgBatchMANUAL=0
    cacheTotalBytes=2,234,217,972
    dataBytes=2,230,823,768
    lockBytes=224
    nBINsEvictedCACHEMODE=0
    nBINsEvictedCRITICAL=0
    nBINsEvictedDAEMON=0
    nBINsEvictedEVICTORTHREAD=0
    nBINsEvictedMANUAL=0
    nBINsFetch=7,104,094
    nBINsFetchMiss=575,490
    nBINsStripped=0
    nBatchesCACHEMODE=0
    nBatchesCRITICAL=0
    nBatchesDAEMON=0
    nBatchesEVICTORTHREAD=0
    nBatchesMANUAL=0
    nCachedBINs=575,857
    nCachedUpperINs=8,018
    nEvictPasses=0
    nINCompactKey=268,311
    nINNoTarget=107,602
    nINSparseTarget=468,257
    nLNsFetch=1,771,930
    nLNsFetchMiss=914,516
    nNodesEvicted=0
    nNodesScanned=0
    nNodesSelected=0
    nRootNodesEvicted=0
    nThreadUnavailable=0
    nUpperINsEvictedCACHEMODE=0
    nUpperINsEvictedCRITICAL=0
    nUpperINsEvictedDAEMON=0
    nUpperINsEvictedEVICTORTHREAD=0
    nUpperINsEvictedMANUAL=0
    nUpperINsFetch=11,797,499
    nUpperINsFetchMiss=8,280
    requiredEvictBytes=0
    sharedCacheTotalBytes=0
    Cleaning: Frequency and extent of log file cleaning activity.
    cleanerBackLog=0
    correctedAvgLNSize=87.11789
    estimatedAvgLNSize=82.74727
    fileDeletionBacklog=0
    nBINDeltasCleaned=2,393,935
    nBINDeltasDead=239,276
    nBINDeltasMigrated=2,154,659
    nBINDeltasObsolete=35,516,504
    nCleanerDeletions=96
    nCleanerEntriesRead=9,257,406
    nCleanerProbeRuns=0
    nCleanerRuns=96
    nClusterLNsProcessed=0
    nINsCleaned=299,195
    nINsDead=2,651
    nINsMigrated=296,544
    nINsObsolete=247,703
    nLNQueueHits=2,683,648
    nLNsCleaned=5,856,844
    nLNsDead=88,852
    nLNsLocked=29
    nLNsMarked=5,767,969
    nLNsMigrated=23
    nLNsObsolete=641,166
    nMarkLNsProcessed=0
    nPendingLNsLocked=1,386
    nPendingLNsProcessed=1,415
    nRepeatIteratorReads=0
    nToBeCleanedLNsProcessed=0
    totalLogSize=10,088,795,476
    Node Compression: Removal and compression of internal btree nodes.
    cursorsBins=0
    dbClosedBins=0
    inCompQueueSize=0
    nonEmptyBins=0
    processedBins=22
    splitBins=0
    Checkpoints: Frequency and extent of checkpointing activity.
    lastCheckpointEnd=0x143b/0xaf23b3
    lastCheckpointId=850
    lastCheckpointStart=0x143a/0xf604ef
    nCheckpoints=11
    nDeltaINFlush=1,718,813
    nFullBINFlush=398,326
    nFullINFlush=483,103
    Environment: General environment wide statistics.
    btreeRelatchesRequired=205,758
    Locks: Locks held by data operations, latching contention on lock table.
    nLatchAcquireNoWaitUnsuccessful=0
    nLatchAcquiresNoWaitSuccessful=0
    nLatchAcquiresNoWaiters=0
    nLatchAcquiresSelfOwned=0
    nLatchAcquiresWithContention=0
    nLatchReleases=0
    nOwners=2
    nReadLocks=2
    nRequests=10,571,692
    nTotalLocks=2
    nWaiters=0
    nWaits=0
    nWriteLocks=0
    My database(s) are sizeable, but on an SSD in a machine with more RAM than DB size (16GB vs 10GB). I have CacheMode.EVICT_LN turned on, but am thinking this may be harmful; toggling it doesn't seem to make a dramatic difference.
    Really, I only want the secondary DB cached (as this is where all the read-queries happen), however, I'm not sure if it's (meaningfully) possible to only cache a secondary DB, as presumably it needs to look up the primary DB's leaf-nodes to return data anyway.
    Additionally, the updates to the DB(s) tend to be fairly large - ie: potentially modifying ~500,000 entries at a time (which is about 2.3% of the DB), which I'm worried tends to blow the secondary DB cache (tho don't know how to prove one way or another).
    I understand different CacheModes can be set on separate databases (and even at a cursor level), however, it's somewhat opaque as to how this works in practice.
    I've tried to run DbCacheSize, but a combination of variable length keys combined with key-prefixing being enabled makes it almost impossible to get meaningful numbers out of it (or at the very least, rather confusing :)
    So, my questions are:
    - Is this actually slow in the first place (ie: 2700 random keys/sec)?
    - Can I speed this up with caching? (I've failed so far)
    - Is it possible (or useful) to cache a secondary DB in preference to the primary?
    - Would switching from using a StoredSortedMap to raw (possibly reusable) cursors give me a significant advantage?
    Thanks so much in advance,
    fb.

    nBINsFetchMiss=575,490
    The first step in tuning the JE cache, as related to performance, is to ensure that nBINsFetchMiss goes to zero. That tells you that you've sized your cache large enough to hold all internal nodes (I know you have lots of memory, but we need to prove that by looking at the stats).
    If all your internal nodes are in cache, that means your entire secondary DB is in cache, because you've configured duplicates (right?). A dup DB does not keep its LNs in cache, so it consists of nothing but internal nodes in cache.
    If you're using EVICT_LN (please do!), you also want to make sure that nEvictPasses=0, and I see that it is.
    Here are some random hints:
    + In general always use getStats(new StatsConfig().setClear(true)). If you don't clear the stats every time interval, then they are cumulative and it's almost impossible to correlate them to what's going on in that time interval.
    + If you're starting with a non-empty env, first load the entire data set and clear the stats, so the fetches for populating the cache don't show up in subsequent stats.
    + If you're having trouble using DbCacheSize, you may want to find out experimentally how much cache is needed to hold the internal nodes, for a given data set in your app. You can do this simply by reading your data set into cache. When nEvictPasses becomes non-zero, the cache has overflowed. This is going to be much more accurate than DbCacheSize anyway.
    + When you measure performance, you need to collect the JE stats (as you have) plus all app performance info (txn rate, etc) for the same time interval. They need to be correlated. The full set of JE environment settings, database settings, and JVM params is also needed.
    On the question of using StoredSortedMap.subMap vs a Cursor directly, there may be an optimization you can make, if your LNs are not in cache, and they're not if you're using EVICT_LN, or if you're not using EVICT_LN but not all LNs fit. However, I think you can make the same optimization using StoredSortedMap.
    Namely when using a key range (whatever API is used), it is necessary to read one key past the range you want, because that's the only way to find out whether there are more keys in the range. If you use subMap or the Cursor API in the most obvious way, this will not only have to find the next key outside the range but will also fetch its LN. I'm guessing this is part of the reason you're seeing a lower operation rate than you might expect. (However, note that you're actually getting double the rate you mention from a JE perspective, because each secondary read is actually two JE reads, counting the secondary DB and primary DB.)
    Before I write a bunch more about how to do that optimization, I think it's worth confirming that the extra LN is being fetched. If you do the measurements as I described, and you're using EVICT_LN, you should be able to get the ratio of LNs fetched (nLNsFetchMiss) to the number of range lookups. So if there is only one key in the range, and I'm right about reading one key beyond it, you'll see double LNs fetched as number of operations.
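    As a rough sketch of the measurement loop described above, using only the calls already mentioned in this thread (getStats with a cleared StatsConfig); the 60-second interval and the surrounding method are just examples:

    // Poll JE stats once per interval so counters can be correlated with app throughput.
    static void pollStats(Environment env) throws InterruptedException {
        StatsConfig statsConfig = new StatsConfig();
        statsConfig.setClear(true); // clear on each read so numbers cover one interval only
        while (true) {
            Thread.sleep(60_000);   // example interval
            EnvironmentStats stats = env.getStats(statsConfig);
            // Things to watch for this workload:
            //  - nBINsFetchMiss: should go to zero once all internal nodes fit in cache
            //  - nEvictPasses:   should stay zero when using CacheMode.EVICT_LN
            //  - nLNsFetchMiss vs. range lookups: shows the cost of reading one key past the range
            System.out.println(stats);
        }
    }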
    --mark

  • How to know database name and schema name

    Hi,
    Once logged in to the database, how is it possible to know the current database and schema that we are using? Is there any system table where we can get the database and schema names?
    Please help me out; it's urgent.
    Regards,
    Sravan

    Probably not.
    If the database name is the name of the current database, it would be essentially redundant. If the database name is the name of some other database, in order to get the names of all the tables in the specified schema, you could create a database link to the remote system (which assumes you have a login and password to the remote database with appropriate privileges, that the database server's tnsnames.ora file has an entry for the remote database, etc.) and query the remote data dictionary tables. Even if you could do that, however, you could not dynamically create triggers on the remote database, since DDL cannot be executed over a database link.
    In theory, you could also load an appropriate JDBC driver into the database and write a Java stored procedure that would connect to the remote database (again, with an appropriate user name & password, host name, and port number) and issue DDL against that remote database. I have a hard time believing, however, that this would be a particularly beneficial approach. It would be easier just to put the appropriate code into each database that needs triggers generated or to have a separate Java application that generates triggers for a number of different databases.
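    As a rough illustration of that Java stored procedure idea (the class would be loaded into the database and wrapped as a stored procedure; the host, port, service, credentials and DDL text below are placeholders, and the remote database must of course allow the operation):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Sketch only: connect to a remote database over JDBC and issue DDL there,
    // something a database link cannot do directly.
    public class RemoteTriggerCreator {
        public static void createTrigger(String host, int port, String service,
                                         String user, String password, String triggerDdl)
                throws Exception {
            String url = "jdbc:oracle:thin:@//" + host + ":" + port + "/" + service;
            try (Connection con = DriverManager.getConnection(url, user, password);
                 Statement stmt = con.createStatement()) {
                stmt.execute(triggerDdl); // the CREATE TRIGGER text runs on the remote database
            }
        }
    }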
    Justin

  • Need help with database sizing

    Hi,
    Can anyone help me understand how to do database sizing for an Oracle database?
    If database sizing differs between versions, then I want to know for 9i, 10g and the 11.5.10.2 application.
    I don't have basic knowledge of database sizing; please let me know, because I have been assigned a project on this in my company.
    Thanks a lot in advance

    The version of the database is irrelevant. Sizing is solely a function of the application.
    In your case, since you have one or more applications in the Oracle eBusiness Suite, there is almost certainly a sizing spreadsheet floating around. If you post this question over in the forum for the particular application(s) you are using, someone over there may be able to give you some pointers. I would also strongly suspect that a Metalink search that included the particular application(s) you're using would be beneficial.
    Justin
