Performance issues if the model includes physical tables from various DBs?

I was asked about the possible performance impact of having the physical model point to, or access, a couple of tables in different databases and joining them so that, in the logical layer, they are treated as a single dimension or a single fact...
Could anyone help discuss this topic or present the pros and cons of such a design? I understand that the OBIEE server is smart enough to deal with these models, but I may be wrong and would appreciate any comments.
Thanks.
Antonio

Similar Messages

  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the CBO optimizer is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
    Operation                             Object Name                       Rows  Bytes  Cost  PStart  PStop
    UPDATE STATEMENT (Optimizer=CHOOSE)                                     1            249
    UPDATE                                SIG.SIG_QUA_IMG_LT
    NESTED LOOPS SEMI                                                       1     266    249
    PARTITION RANGE ALL                                                                         1       9
    TABLE ACCESS FULL                     SIG.SIG_QUA_IMG_LT                1     259    2      1       9
    VIEW                                  SYS.VW_NSO_1                      1     7      247
    NESTED LOOPS                                                            1     739    247
    NESTED LOOPS                                                            1     677    247
    NESTED LOOPS                                                            1     412    246
    NESTED LOOPS                                                            1     114    244
    INDEX RANGE SCAN                      WMSYS.MODIFIED_TABLES_PK          1     62     2
    INDEX RANGE SCAN                      SIG.QIM_PK                        1     52     243
    TABLE ACCESS BY GLOBAL INDEX ROWID    SIG.SIG_QUA_IMG_LT                1     298    2      ROWID   ROW L
    INDEX RANGE SCAN                      SIG.SIG_QUA_IMG_PKI$              1            1
    INDEX RANGE SCAN                      WMSYS.WM$NEXTVER_TABLE_NV_INDX    1     265    1
    INDEX UNIQUE SCAN                     WMSYS.MODIFIED_TABLES_PK          1     62
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
       SET z1.nextver =
              SYS.ltutil.subsversion
                 (z1.nextver,
                  SYS.ltutil.getcontainedverinrange (z1.nextver,
                                                     'SIG.SIG_QUA_IMG',
                                                     'NpCyPCX3dkOAHSuBMjGioQ==',
                                                     4574,
                                                     4575),
                  4574)
     WHERE z1.ROWID IN
              ((SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
                           INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
                           INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
                       t2.ROWID
                  FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
                               UNIQUE VERSION
                          FROM wmsys.wm$modified_tables
                         WHERE table_name = 'SIG.SIG_QUA_IMG'
                           AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
                           AND VERSION > 4574
                           AND VERSION <= 4575) j1,
                       sig.sig_qua_img_lt t1,
                       sig.sig_qua_img_lt t2,
                       wmsys.wm$nextver_table j2,
                       (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
                               UNIQUE VERSION
                          FROM wmsys.wm$modified_tables
                         WHERE table_name = 'SIG.SIG_QUA_IMG'
                           AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
                           AND VERSION > 4574
                           AND VERSION <= 4575) j3
                 WHERE t1.VERSION = j1.VERSION
                   AND t1.ima_id = t2.ima_id
                   AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
                   AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
                   AND t2.nextver != '-1'
                   AND t2.nextver = j2.next_vers
                   AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been analyzed recently, so that the optimizer has the most current statistics about the table.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben
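    As a minimal illustration of the suggestion to keep the table analyzed (the schema and table names are simply taken from the plan above; the exact parameters are assumptions, not a confirmed setup), refreshing optimizer statistics might look roughly like this:
    -- Hypothetical sketch: refresh optimizer statistics on the partitioned,
    -- version-enabled table (and its indexes) so the CBO can cost the merge.
    BEGIN
       DBMS_STATS.gather_table_stats(
          ownname          => 'SIG',
          tabname          => 'SIG_QUA_IMG_LT',
          granularity      => 'ALL',                        -- global and partition-level stats
          cascade          => TRUE,                         -- include the indexes
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /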

  • I'm facing a performance issue while accessing the PLAF table

    Dear all,
    I'm facing a performance issue while accessing the PLAF table.
    The START-OF-SELECTION of the report starts with the following select query.
        SELECT plnum  pwwrk matnr gsmng psttr FROM plaf
        INTO CORRESPONDING FIELDS OF TABLE it_tab
        WHERE matnr IN s_matnr
          AND pwwrk = p_pwwrk
          AND psttr IN s_psttr
          AND auffx = 'X'
          AND paart = 'LA' .
    While executing the report in the quality system there is no performance issue...
    In the production system, however, the very same SELECT query alone takes 15-20 minutes before the report moves on.
    Kindly help me to overcome this problem...
    Regards,
    Jessi

    Hi,
    "Just implement its primary Key
    WHERE PLNUM BETWEEN '0000000001' AND '9999999999' " By this you are implementing the Primary Key
    This statement has nothing to do with performance, because system is not able to use primary key or uses every row.
    Jessica, your query uses secondary index created by SAP:
    1     (Material, plant) which uses fields MANDT MATNR and PLWRK.
    but it is not suitable in your case.
    You can consider adding new one, which would containt all fields: MANDT, MATNR, PWWRK, PSTTR AUFFX PAART
    or - but it depends on number of rows meeting and not meeting (auffx = 'X' AND paart = 'LA' ) condition.
    It could speed the performance, if you would create secondary index based on fields MANDT, MATNR, PWWRK, PSTTR
    and do like Ramchander suggested: remove AUFFX and PAART from index and where section, and remove these unwanted rows
    after the query using DELETE statement.
    Regards,
    Przemysław
    Please also check how many rows there are in the production system.
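    A minimal sketch of what that secondary index could look like at the database level, assuming an Oracle-based system and the field names discussed above (in a real SAP system the index would be defined in the ABAP Dictionary via SE11 rather than created directly with SQL, and the index name here is made up):
    -- Hypothetical sketch: secondary index supporting the selective predicates
    -- on MATNR, PWWRK and PSTTR (client field MANDT first, as in SAP indexes).
    CREATE INDEX zplaf_mat_plant_date ON plaf (mandt, matnr, pwwrk, psttr);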

  • Performance Issue while Joining two Big Tables

    Hello Experts,
    We have the following scenario, wherein we need the sales rep associated with a sales order. This information is available in the VBPA table, with sales order, sales order item and partner function being the key.
    Now I'm interested in only one partner function, e.g. 'ZP'. This table has around 120 million records.
    I tried both options:
    Option 1 - Join this table (VBPA) with the sales order item table (VBAP) within the data foundation layer of the analytic view and do the filtering on partner function.
    Option 2 - Create an attribute view for VBPA with the filter on partner function and then join this attribute view in the logical join layer with the data foundation table.
    Both these options are killing the performance.
    Is there any way to achieve this?
    Your expert opinion is greatly appreciated!!
    Thanks & regards,
    Jomy

    Hi,
    Lars is correct. You may have to spend a little bit more time and give a bigger picture.
    I have used this join. It takes about 2 to 3 seconds to execute this join for me. My data volume is less than yours.
    You must have used a left outer join when joining the attribute view (with the constant filter ZP as specified in your first post) to the data foundation. Please cross-check once again, as sometimes my fat finger has inadvertently changed the join type and I had to go back and fix it. If this is a left outer join or a referential join, HANA does not perform the join if you are not requesting any field from the attribute view on table VBPA. This used to be a problem due to a bug in SP4 but was fixed in SP5.
    However, if you have performed this join in the data foundation, it does enforce the join even if you did not ask for any fields from the VBPA table, the reason being that you have put a constant filter ZR there (LIPS->VBPA join in the data foundation, as specified in one of your later replies).
    If any measure you are selecting in the analytic view is a restricted measure or a calculated measure that needs some field from VBPA, then the join will be enforced, as you would agree. This is where I had the most trouble. My join itself is not bad, but my business requirement to get the current value of a partner attribute in a higher-level calculation view sent too much data from the analytic view to the calculation view.
    Please send the full diagram of your model and the vizplan. Also, if you are using a front end (like Analysis Office), please trap the SQL sent from this front-end tool and include it in the message. Even a plain SQL statement in which you have detected this performance issue would be helpful.
    Ramana
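    For reference, the query shape the model is meant to produce - restricting VBPA to a single partner function before joining it to the item table - could be sketched in plain SQL roughly as follows (table and field names follow the thread, but the choice of LIFNR as the "sales rep" column is an assumption for illustration only):
    -- Hypothetical sketch: filter the ~120M-row partner table to one partner
    -- function first, then left-outer-join it to the sales order items.
    SELECT i.vbeln, i.posnr, p.lifnr AS sales_rep
      FROM vbap AS i
      LEFT OUTER JOIN (SELECT vbeln, posnr, lifnr
                         FROM vbpa
                        WHERE parvw = 'ZP') AS p
        ON  p.vbeln = i.vbeln
        AND p.posnr = i.posnr;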

  • Performance issue with a table receiving frequent inserts

    Hi all,
    We have a table named raw_trap_store with columns trap_id (NUMBER, PK), source_ip (VARCHAR2), oid (VARCHAR2), message (CLOB) and received_time (DATE).
    This table is partitioned into 24 partitions, with received_time as the partitioning column (each hour's data is stored in its own partition).
    The table receives 40-50 inserts/sec on average; the total for a day is around 2.8-3 million records. Data is retained for 2 days.
    No updates happen on this table.
    Performance issue:
    We need a report that selects records from this table based on certain values of source IP (filter condition on the source_ip column).
    We need a report that selects records from this table based on certain values of OID (filter condition on the oid column).
    But if I create indexes on the source_ip and oid columns, the inserts get slow (I created normal indexes, not partitioned indexes).
    Please help me to address the above issue.

    Given the nature of your report (based on source_ip and OID) and the nature of your table partitioning (range partitioned by received_time), you have already made a good decision in creating these particular indexes as normal (b-tree, global) indexes rather than locally partitioned ones. If you had locally partitioned them, your reports would not eliminate partitions (because they do not include the partition key in their WHERE clause), and hence your index range scans would scan all 24 partitions, generating a lot of logical I/O.
    That said, remember that generally we insert once and select many times; you have to balance that. If you are sure that it is the creation of your two indexes that has decreased insert performance, you may set them to an unusable state before the insert and rebuild them afterwards. But this is good advice only if the volume of data to be inserted is much bigger than the volume already in the table.
    And if you are not deleting from the table, and the table has no triggers or integrity constraints (like an FK constraint), then you can opt for a direct-path insert using the /*+ APPEND */ hint (see the sketch below).
    Best regards
    Mohamed Houri
    <mod. action: removed unecessary blog ref.>
    Message was edited by: Nicolas.Gasparotto
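    As a hedged illustration of the two suggestions above (the index names, the staging source and the exact statements are assumptions, not taken from the poster's system):
    -- Hypothetical sketch: load a batch with a direct-path insert, with the
    -- reporting indexes made unusable beforehand and rebuilt afterwards.
    ALTER INDEX raw_trap_store_srcip_idx UNUSABLE;
    ALTER INDEX raw_trap_store_oid_idx   UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;

    INSERT /*+ APPEND */ INTO raw_trap_store
           (trap_id, source_ip, oid, message, received_time)
    SELECT trap_id, source_ip, oid, message, received_time
      FROM raw_trap_staging;   -- assumed staging source
    COMMIT;

    ALTER INDEX raw_trap_store_srcip_idx REBUILD;
    ALTER INDEX raw_trap_store_oid_idx   REBUILD;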

  • Performance issue with view selection after migration from Oracle to MaxDB

    Hello,
    After the migration from Oracle to MaxDB we have serious performance issues with a lot of our table view selections.
    Does anybody know about this problem and how to solve it?
    Best regards!
    Gert-Jan

    Hello Gert-Jan,
    most probably you need additional indexes to get better performance.
    Using the command monitor you can identify the long running SQL statements and check the optimizer access strategy. Then you can decide which indexes might help.
    If this is about an SAP system, you can find additional information about performance analysis in SAP notes 725489 and 819641.
    SAP Hosting provides the so-called service 'MaxDB Migration Support' to help you in such cases. The service description can be found here:
    http://www.saphosting.de/mediacenter/pdfs/solutionbriefs/MaxDB_de.pdf
    http://www.saphosting.com/mediacenter/pdfs/solutionbriefs/maxDB-migration-support_en.pdf.
    Best regards,
    Melanie Handreck

  • Physical Tables from multiple connection pools

    Hi all,
    we have two connection pools in our RPD file (let's say A and B), each connecting to a different source DB.
    Thus, each physical table resides either in source DB A or B (xor).
    The specified connections work in Admin tool, and also direct database requests work in OBIEE 11G if we explicitly
    provide the correct connection pool. The connection pools are specified in order A,B in the Admin tool.
    However, using OBIEE Answers always results in the following error message whenever data from connection pool B
    is to be queried:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000]
    [nQSError: 10058] A general error has occurred.
    [nQSError: 43113] Message returned from OBIS.
    [nQSError: 17001] Oracle Error code: 942, message: ORA-00942: table or view does not exist at OCI call OCIStmtExecute.
    [nQSError: 17010] SQL statement preparation failed. (HY000)
    If we exchange the order of connection pools to B,A in Admin tool, we get the same error if we query
    data from connection pool A.
    It seems that each connection pool needs to be able to access all physical tables. Is that correct?
    Thanks, Thomas
    Edited by: user13376481 on Mar 4, 2013 3:08 AM

    Hi Thomas.
    I have the same scenario...
    Did you try a solution? Did it work?
    Thanks.
    Hamilton T

  • Performance issues related to an advanced table inside an advanced table

    Hi,
    Can anybody let me know what the performance issues are with an advanced table inside an advanced table? I am having a big performance issue while implementing an advanced table in an advanced table; my inner table renders very slowly.
    Thanks

    A table in a table is a performance-eating structure :), because your VL (view link) will cache both parent and child VO rows in the JVM. The only way to improve the performance is to tune your SQL queries.
    --Mukul

  • Add records to a physical table from an internal table

    Hello,
    I am trying to insert records from an internal table into a physical table, but it is not inserting. Please let me know how I can add the records to the table zebp_iv_cf_log.
    I have used only a few fields, for example (after looping I get about 800 records in it_non_ebp):
    loop at non_ebp_inv.
      it_non_ebp-zspgrv      = non_ebp_inv-spgrv.
      it_non_ebp-zspgrq      = non_ebp_inv-spgrq.
      it_non_ebp-zspgrs      = non_ebp_inv-spgrs.
      it_non_ebp-inv_ref_num = non_ebp_inv-xblnr.
      it_non_ebp-zspgrc      = non_ebp_inv-spgrc.
      it_non_ebp-zlifnr      = non_ebp_inv-lifnr.
      append it_non_ebp.
    endloop.
    insert zebp_iv_cf_log from table it_non_ebp[] accepting duplicate keys.
    I also tried inserting the records one by one with an INSERT statement inside a loop, but then it just keeps on processing.
    Shall appreciate the response.
    Thks & Rgds,
    Hemal
    Edited by: hemalgandhi on Jun 12, 2009 6:27 PM

    Hi,
    for the internal table you are using for appending, you must declare it with the header line option,
    as in
    data it_non_ebp type table of zebp_iv_cf_log with header line.
    or else you can have a separate work area of type zebp_iv_cf_log,
    as in
    data wa_zebp_iv_cf_log type zebp_iv_cf_log.
    then the code should look like:
    loop at non_ebp_inv.
      wa_zebp_iv_cf_log-zspgrv      = non_ebp_inv-spgrv.
      wa_zebp_iv_cf_log-zspgrq      = non_ebp_inv-spgrq.
      wa_zebp_iv_cf_log-zspgrs      = non_ebp_inv-spgrs.
      wa_zebp_iv_cf_log-inv_ref_num = non_ebp_inv-xblnr.
      wa_zebp_iv_cf_log-zspgrc      = non_ebp_inv-spgrc.
      wa_zebp_iv_cf_log-zlifnr      = non_ebp_inv-lifnr.
      append wa_zebp_iv_cf_log to it_non_ebp.
      clear wa_zebp_iv_cf_log.
    endloop.
    and use
      modify zebp_iv_cf_log from table it_non_ebp.
    instead of
      insert zebp_iv_cf_log from table it_non_ebp[] accepting duplicate keys.
    Please try it out and let me know if you face any problems.

  • Performance issue while accessing the SWWLOGHIST & SWWWIHEAD tables

    Hi,
    We have applied a workflow function to purchase orders (PO) for release strategy purposes. We have two release levels.
    The performance issue arises when a user updates an existing PO that has already been released: after the user amends the PO and presses the "SAVE" button on the PO screen, the release strategy is reset and the system reads the SWWLOGHIST table, and it takes from a few seconds up to minutes to complete the save process.
    My workflow's scheduled job details are as below:
    SWWERRE - every 20 minutes
    SWWCOND - every 30 minutes
    SWWDHEX - every 3 minutes
    Table entries:
    SWWWIHEAD - 6 million entries
    SWWLOGHIST - 25 million entries
    Should we do data archiving on the above workflow tables?
    Is that the only solution?
    Kindly advise,
    Thanks,
    Regards,
    Thomas

    Hi,
    The sizes of both tables are as below:
    SWWLOGHIST - 3 GB
    SWWWIHEAD - 2 GB
    I have a REORGchk_alltables job on a weekly basis; in DB2 (DB6), do I need to manually reorg the tables or rebuild the indexes?
    You can refer to the attached screenshots for both tables.
    Thanks,
    Regards,
    Thomas

  • Issue with creation of ADF Table from ADF Tree selection

    Hi,
    Following is the usecase.
    I've created a ParentVO & ChildVO from a single table, with view criteria to filter the nodes, and then created two view links, ParentToChild & ChildToChild.
    I added the VOs & corresponding view links to the application module. This created the hierarchy Parent1->Child1->Child2 in the Data Model section of the AM, so I am done with the tree creation process in the model.
    As the VC can't be applied to sub-levels, I followed the approach below to set the VCs for the sub-levels.
    I created a bind variable for the tree and set the VC for both the parent & child VOs in a managed bean before setting the tree variable in the setTree method. So now I am able to display the required tree in the UI with the VCs applied.
    Now I can select the required nodes from the tree and then click a command button to display the selection list as a table.
    In order to achieve this, I tried the two options below.
    1) Created a separate child VO instance (Child3) from the child VO and applied the same view criteria applied initially, then dragged Child3 from the data control onto the UI (jsff) as a table. When I run the application, it displays all the records from the DB table without applying the VCs.
    2) Dragged Child2 onto the UI as a table. When I run the application, it displays the first record from the table without applying the VC.
    But no luck in getting the required functionality.
    I have the following queries.
    a) If we update a transient attribute value for a VO instance, will it take effect at the VO level or only for that particular instance?
    I ask because I've created a new instance of the same VO, but the changes are not reflected in the transient attributes of the new instance.
    b) Can someone advise, for my use case, how to display the selected nodes from a tree in table format?
    I tried my level best to explain the use case, but let me know if you have any queries.
    Thanks in advance,
    Samba.

    This is my code:
    <af:column id="c1" headerText="Sponsor Status">
    <af:selectOneChoice label="Label 2" id="soc1" value="#{row1.sponsorStatusDesc}"
    validator="#{backingBeanScope.EditSponsorDetails.OnSponsorStatusChange}"
    valuePassThru="true">
    <f:selectItems value="#{pageFlowScope.confLists.spStatus}"
    id="si1"/>
    </af:selectOneChoice>
    </af:column>
    and this is what the HTML code says:
    <select id="confSponsor:r2:0:tbIEEEsp:0:soc1::content" class="x2h" name="confSponsor:r2:0:tbIEEEsp:0:soc1">
    <option _adftmpopt="t" value="" title=""></option>
    <option value="4" title="Approved">Approved</option>
    <option value="3" title="Declined">Declined</option>
    <option value="6" title="New">New</option>
    <option value="2" title="Not Valid">Not Valid</option>
    <option value="5" title="On Hold">On Hold</option>
    <option value="1" title="Pending Approval">Pending Approval</option>
    <option value="7" title="Unidentified">Unidentified</option>
    </select>
    Still I cannot see any value populated in the selectOneChoice.

  • Performance issue with Oracle Global Temporary table

    Hi
    Oracle version : 10.2.0.3.0 - Production
    We have an application in Java / Oracle. A user's request comes in as XML; the Oracle parser parses it and inserts it into global temporary tables, and then a business stored procedure picks up the data from these GTTs and does the required processing.
    In the end, the required response data is inserted into response GTTs, from which the response XML is generated.
    Question: does the use of global temporary tables in Oracle degrade performance, given that we have a large number of GTTs in our application, approx. 500-600 such tables?
    Regards,
    Vikas Kumar

    Hi All,
    Here is architecture of my application:
    The Java application creates XML from the screen values and then inserts that XML into a framework table (separate DB schema). Java then calls a stored procedure in the same framework DB, and in the SP we have the following steps.
    1. It fetches the XML from the XMLType table and inserts it into a screen-specific XMLType table in the framework DB schema. This table has a trigger which parses the XML and then inserts the XML values into GTTs which are created in separate product schemas.
    2. It calls the product SP, which contains the business logic; the product SP does the processing and then inserts the response into the response GTT.
    3. The response XML is created using an XML generation function and the response GTT.
    I hope you will understand my architecture this time; now let me know whether GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after that; I don't want to do the explicit deletes which I would have to do if I were using normal tables.
    Regards,
    Vikas Kumar
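    For reference, a global temporary table whose rows disappear automatically at COMMIT (so no explicit delete is needed) could be declared roughly as below; the table and column names are made up for illustration, and ON COMMIT PRESERVE ROWS would instead keep the rows until the session ends:
    -- Hypothetical sketch: a GTT whose rows are private to each session and
    -- are discarded automatically at COMMIT, so no explicit DELETE is needed.
    CREATE GLOBAL TEMPORARY TABLE response_gtt (
       request_id  NUMBER,
       item_code   VARCHAR2(30),
       item_value  VARCHAR2(4000)
    ) ON COMMIT DELETE ROWS;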

  • Performance issues home sharing with Apple TV2 from Mountain Lion Server

    I have a Mac Mini which I have just upgraded to Mountain Lion Server from Snow Leopard Server.
    I have noticed that the performance of streaming a film using Home Sharing to an Apple TV 2 has degraded compared to the Snow Leopard setup. In fact the film halts a number of times during playback, which is not ideal.
    I have tested the network between the 2 devices and cannot find a fault.
    Has anyone come across this problem before?
    Are there any diagnostic tools I can use to measure the Home Sharing streaming service to the Apple TV 2 device?
    Any help much appreciated

    Well, I tried a few other things and one worked but again just the first time I tried connecting to the desktop PC with iTunes. I flashed my router with the latest update and the ATV2 could see the iTunes library and I was able to play media. Later in the day I was going to show off to my daughter that I had fixed it and, to my dismay, no go. I tried opening the suggested ports but no luck.
    I then tried loading iTunes on a Win7 laptop and it works perfectly with the ATV2. Both the laptop and the ATV2 are connected to the router wirelessly, while the desktop is connected to the router by Ethernet. Not sure if this is part of the issue, as it sounds like this didn't work for others. The only other difference between the laptop and the desktop is that the desktop has Win7 SP1 loaded while the laptop does not; now I'm scared to load it, though I don't think that's the issue. All in all, a very vexing situation. Hopefully Apple comes up with a solution soon.

  • Performance issues in modeling process chains

    Hi friends,
    What steps should be taken care of when modelling process chains? I am talking with respect to loading performance... The design of the process chain should not hinder the loading process. Please help. My id is [email protected]
      Regards,
    Pavan

    Hi,
    BW architecture, sizing, and data modelling
    System load analysis
    Indices and database statistics
    Business Intelligence Performance Tuning [original link is broken]
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    1) If you load data into a data target, you can load directly into the data target without going through the PSA.
    2) If you use a cube and you partition it, you can improve loading performance.
    Hareesh

  • Issue related to preparing an internal table from 3 other internal tables

    Hi All,
    I have 3 internal tables declared as below.
    DATA : i_header TYPE STANDARD TABLE OF zexport_header
                    INITIAL SIZE 0 WITH HEADER LINE.
    DATA : i_class TYPE STANDARD TABLE OF zexport_class
                    INITIAL SIZE 0 WITH HEADER LINE.
    DATA: i_data_sel LIKE i_data OCCURS 0 WITH HEADER LINE.
    Now I want to declare another internal table that includes all of the above 3 internal tables.
    Can anybody tell me how I have to declare it?
    DATA: BEGIN OF i_data_disp OCCURS 0,
          END OF i_data_disp occurs 0.
    Thanks in advance.
    Thanks & Regards,
    Rayeez.

    Hi,
    just have a look at the code below.
    You have to declare it as a TYPE, then use it in any internal table.
    TYPES VECTOR TYPE HASHED TABLE OF I WITH UNIQUE KEY TABLE LINE.
    TYPES: BEGIN OF LINE,
             COLUMN1 TYPE I,
             COLUMN2 TYPE I,
             COLUMN3 TYPE I,
           END OF LINE.
    TYPES ITAB TYPE SORTED TABLE OF LINE WITH UNIQUE KEY COLUMN1.
    TYPES: BEGIN OF DEEPLINE,
             FIELD  TYPE C,
             TABLE1 TYPE VECTOR,
             TABLE2 TYPE ITAB,
           END OF DEEPLINE.
    TYPES DEEPTABLE TYPE STANDARD TABLE OF DEEPLINE
                    WITH DEFAULT KEY.
    regards
    vinod
