Determine indexes by rule of thumb - no data or table structures

Hi,
I'm doing some research (and would really like to see people's answers).
Could someone give me an idea of the best way to index the tables that appear in the statement below, to optimise performance?
select p.fname, p.sname, p.personid, av.availid, av.adate, nwa.hospitalid,
       nvl(to_char(av.astart,'HH24:MI'),'Not Specified') as ActualStart,
       nvl(to_char(av.aend,'HH24:MI'),'Not Specified') as ActualEnd,
       av.anyearly, av.anymiddle, av.anylate, av.anynight
from tblperson p
left outer join tblavailability av on p.personid = av.personid
left outer join tblnurseworkarea nwa on p.personid = nwa.personid
order by 1, 2;

av.anyearly, av.anymiddle, av.anylate and av.anynight are all boolean fields (1/0).
Please, I need someone to tell me how they would index the tables used here, from instinct and rule of thumb.
Much appreciated.

what about if the query were like so:

select p.fname, p.sname, p.personid, av.availid, av.adate, nwa.hospitalid,
       nvl(to_char(av.astart,'HH24:MI'),'Not Specified') as ActualStart,
       nvl(to_char(av.aend,'HH24:MI'),'Not Specified') as ActualEnd,
       av.anyearly, av.anymiddle, av.anylate, av.anynight
from tblperson p
left outer join tblavailability av on p.personid = av.personid
left outer join tblnurseworkarea nwa on p.personid = nwa.personid
WHERE av.availid = 3
order by 1, 2;

any difference in what you would do?
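By rule of thumb, with equality joins on personid, one would index the join columns of the child tables; assuming tblperson.personid is the primary key, it is already indexed by its PK constraint. For the second query, note that the WHERE av.availid = 3 predicate effectively turns the outer join to tblavailability into an inner join, so an index on availid helps (unless availid is itself the primary key of tblavailability, in which case it is already indexed). A sketch; the index names are illustrative:

```sql
-- Join columns on the child tables (assuming personid is the PK of
-- tblperson and therefore already indexed by the PK constraint):
CREATE INDEX idx_avail_personid ON tblavailability (personid);
CREATE INDEX idx_nwa_personid   ON tblnurseworkarea (personid);

-- For the second query, an index on the filter column is useful
-- (skip this if availid is already the PK of tblavailability):
CREATE INDEX idx_avail_availid  ON tblavailability (availid);
```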

Similar Messages

  • How to write an export dump command with no table data, only table structure

    How do I write an export dump command with no table data, only table structure? And is there a command for the whole schema?
    e.g. an export dump command for the scott schema and all tables within the scott schema, where no table data is exported.

    If I understand the question, it sounds like you just need to add the flag "ROWS=N" to your export command (I assume that you're talking about the old export utility, not the Data Pump version).
    Justin
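    For example, a sketch using the classic export utility (the connect string and file name are placeholders):

    ```shell
    # Structure-only export of the SCOTT schema: ROWS=N skips table data.
    exp scott/tiger@orcl owner=scott file=scott_ddl_only.dmp rows=n
    ```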

  • Function based indexes info,pls let me know data dictionary table name?

    Hi,
    Please let me know where I can find function-based index information, i.e. the data dictionary view name,
    and the information it holds, such as the function used and the column name.
    Thanks,
    Kumar.

    all_ind_expressions
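    For example, a quick query against that view (MY_TABLE is a placeholder table name; the expression text is returned as a LONG):

    ```sql
    -- Function-based index expressions, per index and column position:
    SELECT index_name, column_position, column_expression
      FROM all_ind_expressions
     WHERE table_name = 'MY_TABLE'
     ORDER BY index_name, column_position;
    ```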

  • How to export and import only data not table structure

    Hi Guys,
    I am not very familiar with the import/export utilities; please help me.
    I have two schemas: Schema1 and Schema2.
    I have been using Schema1, which holds my valuable data. Now I want to move this data from Schema1 to Schema2.
    In Schema2 I have only the table structure, no data.

    user1118517 wrote:
    Hi Guys,
    I am not much aware about import ,export utility please help me ..
    I have two schema .. Schema1, Schema2
    i used to use Schema1 in that my valuable data is present . now i want to move this data from Schema1 to Schema2 ..
    In schema2 , i have only table structure , not any data ..
    Nothing wrong with exporting the structure. Just use 'ignore=y' on the import. When the import runs the CREATE TABLE statements they will fail because the tables already exist, but ignore=y means "ignore failures of CREATE", and it will then proceed to INSERT the data.
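    As a sketch of that approach (user names, passwords and file names are placeholders):

    ```shell
    # Export Schema1 with data...
    exp schema1/pwd1 owner=schema1 file=schema1.dmp

    # ...then import into Schema2; ignore=y skips the failing CREATE TABLE
    # statements and proceeds with the INSERTs.
    imp schema2/pwd2 fromuser=schema1 touser=schema2 file=schema1.dmp ignore=y
    ```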

  • Data storage & Table structure in BW/BI

    Hi Experts,
    I know that when we create an InfoCube and load data, it gets stored on the BI server. My question is: where exactly does the data get stored on the server, and what is the table structure of the data stored in BW/BI?
    Thanks in advance
    Shiva

    Hi,
    You have tables to Master Data:
    Eg: 0MATERIAL
    /BI0/HMATERIAL                 Hierarchy: InfoObject Material
    /BI0/IMATERIAL                 SID Structure of Hierarchies: InfoObject M
    /BI0/KMATERIAL                 Conversion of Hierarchy Nodes - SID: InfoO
    /BI0/MMATERIAL                 View of Master Data Tables: Characteristic
    /BI0/PMATERIAL                 Master Data (Time-Ind.): Characteristic Ma
    /BI0/RMATERIAL                 View SIDs and Char. Values: Characteristic
    /BI0/SMATERIAL                 Master Data IDs: InfoObject Material
    /BI0/TMATERIAL                 Texts: Char. Material
    /BI0/XMATERIAL                 Attribute SID Table: InfoObject Material
    /BI0/ZMATERIAL                 View Hierarchy SIDs and Nodes: Char. Mater
    For cubes:
    just go to SE11, enter *0IC_C03* (with the asterisk wildcards) and press F4; you will then see all the tables for that cube. You can check other cubes the same way.
    Thanks
    Reddy
    Edited by: Surendra Reddy on Jan 21, 2010 5:40 AM

  • Oracle Data Pump - Table Structure change

    Hi,
    we have a daily-partitioned table, and for backup we use Data Pump (expdp). Our policy is to drop each partition after it is backed up (archiving).
    We have archived dump files for one year. A few days ago a developer changed the table structure and added a new column to the table.
    Now we are unable to restore the old partitions. Is there a way to restore a partition if a new column has been added to / dropped from the current table?
    Thanks
    Sachin

    If a new column has been added to the table, you can import just the data from the old structure into the new structure. Use the parameter CONTENT=DATA_ONLY.
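    A sketch with Data Pump import (the directory object, dump file, schema and table/partition names are placeholders):

    ```shell
    # Load only the rows from an archived partition dump into the existing
    # (altered) table; CONTENT=DATA_ONLY skips all metadata.
    impdp system/pwd directory=dump_dir dumpfile=part_20130101.dmp \
          tables=sachin.daily_tab:p_20130101 content=data_only
    ```

    Depending on how the dump was taken, table_exists_action=append may also be needed so the import loads into the pre-existing table.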

  • Strategy in Data Warehouse Table Structure

    I'm building a relational data warehouse, and there are two approaches that seem almost interchangeable to me, despite being quite different from each other.
    The first approach is rather simple. I have a "User" table with a bunch of foreign keys, and then a bunch of other tables containing user attributes: one table for "department," another for "payroll type," another for "primary location," and so on for 20 different user attributes.
    The second approach, instead of using 20+ tables, combines this down into far fewer. I would have an "Attribute Type" table and an "Attribute" table. These two, in conjunction with a bridge table, could accommodate as many attributes as necessary within three tables. If the business wants to track a new user-related attribute, I don't need any new tables; I would simply add the new attribute into the "Attribute Type" table as, say, "attribute 21," and begin tracking it. All the work could be done without ever adding new tables or columns.
    Both approaches seem to maintain (at least) 3NF. Is one approach better in certain circumstances, and the other more appropriate at other times? Any insight is appreciated!
    BrainE

    Hi Brian,
    The second approach with three tables is not really good here. The Query Optimizer in SQL Server has a few enhancements for star/snowflake schemas in DW environments, and the 3-table (entity-attribute-value) schema would not be able to benefit from them. It would also be harder to maintain, load data into and query. Finally, your attributes could have different data types, which you would need to store.
    I would suggest going with the first solution (multiple dimension tables) and following a few extra rules:
    - Avoid nullable attributes.
    - Choose attribute data types as narrow as possible.
    - Avoid string attributes; if needed, create separate dimension tables for them.
    - Use columnstore indexes.
    - Upgrade to SQL Server 2014 if at all possible; there are multiple enhancements in batch-mode processing there.
    Thank you!
    Dmitri V. Korotkevitch (MVP, MCM, MCPD)
    My blog: http://aboutsqlserver.com

  • Aligning plain data in a table-like structure

    Hi,
    Actually the requirement is to align data in a table-like format. I used an iterator for pagination, but I am not able to add columns in it. Please find the sample code below; I need the output structured like a table. Please help me with this. Thanks in advance.
    <af:iterator id="i1" var="row" value="#{bindings.CatPartiesS1.collectionModel}"
                 binding="#{backingBeanScope.ListBean.emplsTableIterator}" rows="3">
      <af:spacer width="5" height="10" id="s3416"/>
      <af:panelGroupLayout id="pgl439" layout="horizontal">
        <af:spacer width="5" height="10" id="s39"/>
        <af:panelGroupLayout id="pgl120" inlineStyle="width:120px;" layout="horizontal">
          <af:selectBooleanCheckbox text="" id="sbc2431" inlineStyle="font-weight:bold;"/>
          <af:panelGroupLayout id="pgl52" layout="horizontal" inlineStyle="width:120px;">
            <af:outputText value="#{row.PurchasingCategory}" id="ot8"/>
          </af:panelGroupLayout>
        </af:panelGroupLayout>
        <af:panelGroupLayout id="pglert120" inlineStyle="width:120px;" layout="horizontal">
          <af:spacer width="5" height="10" id="s32912"/>
          <af:panelGroupLayout id="pgl66wq49" layout="horizontal" inlineStyle="width:120px;">
            <af:outputText value="#{row.MatlQty}" id="ot10"/>
          </af:panelGroupLayout>
        </af:panelGroupLayout>
        <af:spacer width="5" height="10" id="s51"/>
        <af:panelGroupLayout id="pgl634" layout="horizontal" inlineStyle="width:120px;">
          <af:outputText value="#{row.MatlCost5}" id="ot331"/>
        </af:panelGroupLayout>
        <af:spacer width="5" height="10" id="s544t1"/>
        <af:panelGroupLayout id="pgl1d3s20" inlineStyle="width:120px;" layout="horizontal">
          <af:outputText value="Current Selection:" partialTriggers="::i1" id="ot2"/>
          <af:outputText value="#{bindings.DataCode3.inputValue}" id="ot17" partialTriggers="::i1"/>
        </af:panelGroupLayout>
        <af:spacer width="5" height="10" id="s5123"/>
        <af:panelGroupLayout id="pgl12de0" inlineStyle="width:120px;" layout="horizontal">
          <af:outputText value="#{row.CatMatlCost}" id="ot41"/>
        </af:panelGroupLayout>
        <af:spacer width="5" height="10" id="s51213"/>
        <af:panelGroupLayout id="pgl2326" layout="horizontal" inlineStyle="width:120px;">
          <af:outputText value="#{row.CatMatlPur}" id="ot7"/>
        </af:panelGroupLayout>
        <af:spacer width="5" height="10" id="s5343"/>
        <af:panelGroupLayout id="pgldw2120" inlineStyle="width:120px;" layout="horizontal">
          <af:outputText value="Current Selection:" partialTriggers="::i1" id="ot9"/>
          <af:outputText value="#{bindings.DataCode2.inputValue}" id="ot171" partialTriggers="::i1"/>
        </af:panelGroupLayout>
        <af:spacer width="10" height="10" id="s53343"/>
        <af:panelGroupLayout id="pgee3l120" inlineStyle="width:120px;" layout="horizontal">
          <af:outputText value="#{row.CatMatlPur}" id="ot57"/>
        </af:panelGroupLayout>
      </af:panelGroupLayout>
    </af:iterator>
    Thanks
    VJ

    Hi Frank,
    Here i need some clarification.
    Actually, I have read some blogs saying that Trinidad components should not be mixed with ADF components, as their lifecycles are different.
    Can we use them in the same ADF page?
    After adding Trinidad I hit some issues, such as a button action not working on the first click (anyway, we solved those). How do we resolve these kinds of issues, and are they caused by Trinidad or not?
    Can you please clarify?
    Below is my code for your reference if required. (This code has one selectOneChoice with + and - buttons. If the user clicks '+', another selectOneChoice and a further pair of +/- buttons should be added to the panelGroupLayout; an iterator is used to do this.)
    <af:iterator id="i1" var="dynamicRow" value="#{viewScope.valuesBean.ccAL}"
                 varStatus="dynamicIndex">
      <af:panelGroupLayout layout="horizontal" id="colpg_${dynamicIndex.index}"
                           clientComponent="true">
        <af:selectOneChoice value="#{dynamicRow.cntValue}" immediate="true"
                            id="soc11_${dynamicIndex.index}" autoSubmit="true"
                            contentStyle="width:265px;" valuePassThru="true"
                            clientComponent="true"
                            valueChangeListener="#{backingBeanScope.testSearchBean.valueChangeListenerGeneric}"
                            disabled="#{viewScope.readOnly.editable['fieldEditable'] eq 'N'}">
          <af:forEach var="lov1" items="#{bindings.purposeLOVIterator.allRowsInRange}">
            <f:selectItem id="sei1" itemLabel="#{lov1.codeShortName}"
                          itemValue="#{lov1.codeShortName}"/>
          </af:forEach>
        </af:selectOneChoice>
        <af:inputText contentStyle="width:20px;" id="itcol_${dynamicIndex.index}"
                      value="#{dynamicRow.qtyValue}" clientComponent="true"
                      autoSubmit="true" immediate="true"
                      valueChangeListener="#{backingBeanScope.testSearchBean.valueChangeListenerGeneric}"
                      disabled="#{viewScope.readOnly.editable['fieldEditable'] eq 'N'}"/>
        <af:spacer width="5" height="10" id="s6"
                   visible="#{viewScope.valuesBean.ccCount eq dynamicIndex.index+1}"/>
        <af:commandButton text="+" id="cb1_${dynamicIndex.index}"
                          visible="#{viewScope.valuesBean.ccCount eq dynamicIndex.index+1}"
                          binding="#{backingBeanScope.editBean.addCCBinding}"
                          clientComponent="true" styleClass="dynComBtn"
                          disabled="#{viewScope.readOnly.editable['fieldEditable'] eq 'N'}">
          <af:clientAttribute name="tubeName" value="CollectionTube"/>
          <f:attribute name="TestProcessId" value="#{dynamicRow.cntId}"/>
          <af:clientListener method="onAddAction" type="click"/>
          <af:serverListener type="MyCustomServerEvent"
                             method="#{backingBeanScope.editBean.addAL}"/>
        </af:commandButton>
        <af:spacer width="5" height="10" id="s4" visible="#{dynamicIndex.index >0}"/>
        <af:commandButton text="-" id="cb2_${dynamicIndex.index}"
                          visible="#{dynamicIndex.index >0}"
                          clientComponent="true" styleClass="dynComBtn"
                          disabled="#{viewScope.readOnly.editable['fieldEditable'] eq 'N'}">
          <f:attribute name="removeId" value="#{dynamicRow}"/>
          <af:clientAttribute name="tubeName" value="CollectionTube"/>
          <f:attribute name="delIndex" value="#{dynamicIndex.index}"/>
          <af:clientAttribute name="dIndex" value="#{dynamicIndex.index}"/>
          <af:clientListener method="onDelAction" type="click"/>
          <af:serverListener type="MyCustomDelEvent"
                             method="#{backingBeanScope.editBean.delAL}"/>
        </af:commandButton>
      </af:panelGroupLayout>
      <af:spacer width="10" height="5" id="s16"/>
    </af:iterator>
    Thanks

  • Rules of thumb for sizing an Oracle BPM 11g deployment

    Anyone out there have some rules of thumb they are using to size an environment for Oracle BPM 11g? I know processing power can vary widely with process complexity and the amount of data floating around. Still, I get asked questions like "how much processing power do I need for this solution?" I have a current client looking at 2500+ potential human-workflow users with 1000 concurrent at peak load. Short of running some performance tests myself and extrapolating the numbers, I am at a loss. Hoping some others can chime in with some thoughts.

    An update... my rule of thumb of 50 users per core has proven to be a bit high. It looks to be 30-40 per core when you split out the BPMN engine from other compute-intensive processes such as the ESB. Without splitting the overall SOA/BPM functionality over multiple servers, the concurrent user counts the various technologies can sustain really take a hit.

  • Rule of thumb for CACHE_SIZE

    Hi all,
    I have an outsourced system where the DB (MaxDB 7.6.05) runs on a single SLES 10 server.
    This server has 16 GB RAM. CACHE_SIZE is configured to 20 GB (2,500,000 pages), and top shows me that of the 40 GB of swap, 20 GB are used (the file system cache is so small that it can be ignored), and swapd runs at 6% CPU or more all the time.
    Is this a good situation? Not in my view! Does anybody have a rule of thumb for CACHE_SIZE?
    I would say 75% of RAM should be the highest value; any other suggestions?
    Thank you!
    Christian

    The cache must always fit in physical RAM. The purpose of the cache is to hold data in memory so that it doesn't need to be read again and again from disk. If you configure the cache bigger than the physically available RAM (minus the application, minus the operating system), the system will start swapping/paging in and out and hence slow down the whole system.
    Ideally the machine does not swap at all and all data is in memory.
    Markus

  • Subtitling Rules of Thumb

    Hi,
    I'm trying to create some templates for doing subtitles on clips in the future. In determining how long the templates should be: from what I have viewed, it takes about 3 seconds to read 8 words. Are there any general rules of thumb?
    Thanks

    A rule of thumb often used in news is 3 words (spoken) = 1 second.
    This is for timing pre-edited video to a news reader's live voice-over... obviously not directly applicable to subtitling, but somewhere to start.
    cheers
    Andy

  • Question regarding the rule of thumb regarding Exchange proc cores ratio to GC proc cores

    Hi all,
    I am studying towards my MCSE Messaging for Exchange 2013 and came across, on CBT Nuggets, a statement about a rule of thumb for the ratio of processor cores on an Exchange 2013 server to GC processor cores.
    Greg Shields from CBT Nuggets says that the ratio should be:
    4:1 for x86 domain controllers
    8:1 for x64 domain controllers
    So is he talking about one mailbox server role with 8 processor cores to 1 processor core on an x64 domain controller? What if there is already a heap of servers using that GC? In our datacentre we have two x64 domain controllers, but also a whole range of other servers on that network.
    Also, what if you have two mailbox server roles? Are we then talking 16:2?
    Hope someone can help clarify this.
    Hope someone can help clarify this.
    Regards
    ronnie.jorgensen systems engineer
    My blog

    Does this help explain it better?
    http://blogs.technet.com/b/exchange/archive/2013/05/06/ask-the-perf-guy-sizing-exchange-2013-deployments.aspx
    Active Directory sizing remains the same as it was for Exchange 2010. As we gain more experience with production deployments we may adjust this in the future. For Exchange 2013, we recommend deploying a ratio of 1 Active Directory global catalog processor
    core for every 8 Mailbox role processor cores handling active load, assuming 64-bit global catalog servers:
    If we revisit our example scenario, we can easily calculate the required number of GC cores.
    Assuming that my Active Directory GCs are also deployed on the same server hardware configuration as my CAS & Mailbox role servers in the example scenario with 12 processor cores, then my GC server count would be:
    In order to sustain double failures, we would need to add 2 more GCs to this calculation, which would take us to 7 GC servers for the deployment.
    As a best practice, we recommend sizing memory on the global catalog servers such that the entire NTDS.DIT database file can be contained in RAM. This will provide optimal query performance and a much better end-user experience for Exchange workloads.
    and of course use the latest calculator version to determine your required specs:
    http://blogs.technet.com/b/exchange/archive/2013/05/14/released-exchange-2013-server-role-requirements-calculator.aspx
    When you calculate required GC cores, the ratio 1:8 applies to the cores required to support active mailboxes ("Mailbox role processor cores handling active load") and not to all cores physically available on Exchange server.
    So, for example, if you have 32 total cores on an Exchange server but only 16 of them are actually needed to support active mailboxes (as determined by the calculator), you will need only 16/8=2 GC cores to support this Exchange server's AD queries.

  • How to add an index to a materialized view in Data Modeler 3.3

    Hello everyone,
    I'm looking for a how-to on adding an index to a materialized view in Data Modeler 3.3.0.747, as I couldn't find a way to do it so far.
    I looked here:
    Relational Model
      Physical Model
        Oracle 11g
          Materialized Views
            "my_mv_name"
              "INDEXES" IS NOT HERE IN THE TREE
    "Tables" does not include it either.
    Thank you & Best regards,
    Blama

    Hi David,
    thanks a lot. I did so and it worked, but I found a minor bug while doing so:
    I marked the table as "Implement as Materialized View" and went to File->Export->DDL (for Oracle 11g).
    The generated code (I checked all options in "Drop Selection") includes the line:
    DROP MATERIALIZED VIEW mv_mymatview CASCADE CONSTRAINTS;
    which produces a syntax error.
    Best regards,
    Blama

  • Determination of checking Rule in sales order

    Dear all,
    how is the checking rule determined in the sales order availability check? In transaction OPJJ we have maintained many checking rules for one checking group; how does the system determine which checking rule to use in the sales order?
    I am looking forward to your feedback.
    Best Regards,
    Kumar

    Hi Kumar,
    You will have to maintain the default checking group for the material type and plant in Customizing.
    You can assign the plant and the checking rule for backorder processing.
    You can set default values for the relevant sales area and the availability checking rule.
    You will have to assign the requirement class to the requirement type.
    Maintain the availability check and TOR (transfer of requirements) for the relevant schedule line.
    These settings need to be made for the availability check to be performed at sales order level.
    Hope this will help you.
    Thanks & Regards
    Krishna Mohan

  • ABAP code in update rules to convert the date

    Hi,
    Could anyone send me the ABAP code that is written in the update rules to convert a date from DD/MM/YYYY (length 10) to YYYYMMDD (length 8) format?
    Also, please let me know where I should write this code: while creating the update rules or while creating the InfoSource?
    Thanks,

    Hi Bharath,
    I suggest you do the conversion of dates in the transfer rules. Here is the code you need:
    * Assuming the source date field is called MYDATE (DD/MM/YYYY, 10 chars),
    * place the following in a routine in the transfer rules:
    CONCATENATE tran_structure-mydate+6(4) tran_structure-mydate+3(2) tran_structure-mydate(2) INTO result.
    Replace MYDATE with the name of the source field (10 characters) in the transfer structure. Hope this helps.
