Usage of queries

How can I find out whether a specific query or workbook is being used / looked up?
How many times / how often is it used?
How can I find the most used and least used queries?
Thanks.

Query Usage
Please do not post the same question on multiple threads.

Similar Messages

  • Table for model characteristic usage in queries

    Hi, does anyone know the name of the table(s) from which I can establish the following:
    1.  I have a multicube with say 15 dimensions consisting of a total of 35 Dimension characteristics
    2. There are say 6 navigational attributes switched on for reporting
    3. There are say 10 key figures
    4. There are say 65 queries built off this info provider
    What I would like to establish is which of the elements in 1, 2 and 3 have been used in the 65 queries.  I am aware of the RSZ* tables but was not able to find what I am looking for.  Thanks in advance

    Hi Anand,
    I found something relevant to your query; please go through this...
            The names of the structural components are language dependent. When you define and save a query with the logon language as German, only the corresponding German texts are saved in the relevant tables (for example, RSZELTTXT). This is not the original key figure, but rather a structural component, a new object that links to the original key figure but is not the original key figure.
    ·        If you then execute the query in a different logon language to the one that you used to define the query, the standard texts are displayed for all structural components that have no text.
    ·        Structures and structural components (selections, formulas) are completely independent objects that appear in the BEx Query Designer and that you can store in the relevant tables when you save them. Since texts are fundamentally language-dependent, the texts in the query definition appear in the logon language only when you save the query.
    regards,
    rudra.
    Assign points if helpful...

  • BW Document repository usage for Queries

    Hi all,
    We aim to use the BW 'document' functionality to link training web sites and documentation (based on a web server) to InfoObjects.
    After prototyping on our 3.5 system the following questions came up:
    1. Documents can be linked to metadata, master data and InfoProviders. When activating all 3 display options in the BEx Query properties, where can the InfoProvider-related documents be seen and accessed? The other 2 document types create a symbol next to the InfoObject!
    2. Various types of documents can be linked (Word, PDF, etc.). In case you simply want to link a URL, how does this work?
    3. Who has implemented a similar concept, and how did you set it up (which of the 3 document types above were used)?
    What were your learnings and Best Practices?
    Thanks for all feedback,
    Markus

    1) Metadata - you have to get it using the document web item. Transaction data will have links appear against the key figures when there is a document matching the characteristics in display.
    2) URLs also have to be linked in documents, or as text in a normal text document.
    3) If a user has access to documents, they have all access. It is not possible to control which docs a user can change. Potentially, any user can go in and change a document created by any other user.
    Also, I don't think there is functionality to upload documents through the web.
    Cheers
    Aneesh

  • Change Management Process for SAP Queries

    Our company has recently implemented SAP.  We are having some struggles with agreeing on a process for developing queries in SQ01.  Our functional specialists currently have to create a query in DEV200 then download to DEV100 to create a transport. 
    Is this the general practice?  It seems very strange and way too cumbersome considering SAP Query was designed for quick access to data.  As for security the tables are already protected by the roles assigned to the users and the queries are assigned to User Groups within the query.
    It also seems that HR uses the queries for all kinds of data searching so access to SQ01 to create a quick query in PRD seems appropriate.

    Hi Karen,
    I have seen some companies severely restrict query writing and usage because queries, if not written well, can seriously degrade system performance.
    A potential rationale for your company's approach could be to test the query's efficacy and resulting system performance when a query is run.  However, this kind of test won't be very accurate unless you do frequent refreshes of your production system.
    I have been with some companies that do write queries in PRD, but the ability to write queries is limited to a very small number of people, and they wind up becoming strictly query writers, which somewhat defeats the purpose.
    A lot of the answer to how your company should approach query writing is going to depend on your landscape, who has access to write queries, and if that access is to information that crosses all functions or is limited to a smaller set of data.
    If your company has always had information dissemination controlled by IT, i.e. users have historically had to go to a central group to get a report, then there will be cultural changes needed as well as training if the user population should write queries directly in PRD.
    Regards,
    Julie

  • BADI Usage in Query for variables

    Hi,
         How do I implement a BADI for variables in a query, instead of classes, in BI 7.0?
    If anybody has come across BADI usage in queries, please provide me the steps.

    Hi,
        I need to implement a BADI for queries.
    My scenario:
            To implement different logics for different variables in a query - is it possible to implement this in a single BADI definition? (BI 7.0)
    Currently I have implemented different BADI definitions for the different logics.
    Please reply as soon as possible.

  • When to use prepared statement in java?

    Hi all,
    If we have a query like select * from {tablename} (here tablename is a variable), will the use of a prepared statement improve performance? My guess is that a plain statement is a better option in this case, as the tablename may change for every other query. I think prepared statements are not useful if the tablename changes; they are useful only when the where clause has dynamic values.

    cantor wrote:
    > Are you sure that your approach is possible? The next example causes an exception for me.
    > PreparedStatement ps = conn.prepareStatement("select * from ?");
    > ps.setString(1, "TABLE_NAME");
    > ps.executeQuery();
    I didn't say that he should solve it in that way. He should create one prepared statement like "select a, b, c from tablename1" and another prepared statement when he wants to execute "select d, e, f from tablename2".
    > And as I understand, this code will not improve performance (and will not even work).
    As I said, it can.
    > Prepared statements make possible the usage of compiled queries inside the DB. When the DB gets a prepared statement call, it needs to parse it only once. And it can collect some statistics to improve the execution plan. But when the table name is not specified, this approach will not work.
    Yes it might. There are database drivers that can cache prepared statements, and it isn't likely that he executes that query on more than, e.g., 20 tables.
    The database can also cache compiled statements in a cache.
    Kaj
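    Kaj's suggestion - one prepared statement per table, since a table name cannot be a bind parameter - might be sketched roughly as below. The class and table names are illustrative, and the cached SQL text stands in for the PreparedStatement you would get from conn.prepareStatement, so the sketch runs without a database; the key points are the whitelist check (the table name is spliced into the SQL text, so it must never come from user input unvalidated) and the one-entry-per-table cache.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class PerTableStatements {
    // Table names cannot be bind parameters ("select * from ?" fails),
    // so validate against a fixed whitelist before splicing them in.
    private static final Set<String> KNOWN_TABLES = Set.of("ORDERS", "INVOICES");

    // One cached entry per table; in real code the value would be the
    // PreparedStatement returned by conn.prepareStatement(sql).
    private final Map<String, String> sqlByTable = new HashMap<>();

    public String sqlFor(String table) {
        if (!KNOWN_TABLES.contains(table)) {
            throw new IllegalArgumentException("Unknown table: " + table);
        }
        // computeIfAbsent builds the SQL once, then reuses the cached entry,
        // mirroring how a per-table PreparedStatement is parsed only once.
        return sqlByTable.computeIfAbsent(table,
                t -> "select * from " + t + " where id = ?");
    }

    public static void main(String[] args) {
        PerTableStatements cache = new PerTableStatements();
        System.out.println(cache.sqlFor("ORDERS"));
        // Second lookup returns the same cached instance.
        System.out.println(cache.sqlFor("ORDERS") == cache.sqlFor("ORDERS"));
    }
}
```

    With a handful of tables this keeps the per-table statements reusable, which is what lets the driver or database cache the compiled plan.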

  • Once the aggregated cube how to run the query

    Hi,
    I have a cube with a lot of data, so I used aggregation.
    After that, how do I run a query from the aggregated cube?
    Whenever I go to RRMX, it shows the cube as not aggregated.
    Once I aggregate the cube, where is it stored?
    Please let me know.

    InfoCube aggregates are <b>separate database tables</b>.
    Aggregates are more summarized versions of the base InfoCube.  There is an aggregate fact table, e.g.  /BIC/E1#####  ===>  /BIC/E100027.  If you don't automatically compress your aggregates, there would also be an F fact table /BIC/F100027.
    There are aggregate dimension tables that are also created, e.g. /BIC/D1000271.  If a dimension for the aggregate is the same as the base InfoCube, then there is no aggregate dimension table for that dimension and the queries will use that dimension table from the base cube.
    As long as the aggregate is active, BW will automatically use it instead of the base cube whenever the aggregate contains all the characteristics necessary to satisfy the query. 
    You can verify the aggregate's usage by looking at info in table RSDDSTAT - it will show the Aggregate number if used (will not show aggregate usage for queries on a MultiProvider if you are on a more recent Svc Pack).
    You can also run the query thru RSRT, using the Exec & Debug option - check the "Display aggregate found" option and it will display what aggregate(s) it found and which one(s) it used.

  • Query Complexity

    I have now determined the last usage of the queries built off an InfoProvider.  Let's assume that with this information I have established that there are 50 queries which I am interested in.
    Without opening the design of the query is there any way I can determine the complexity of these queries for example from basic drill downs to ones which include complex calculated and restricted key figures?
    Thanks

    Hi Niten,
    You can just go to transaction RSRT and execute your query. There you can find the dependencies, drilldowns, etc. of the query. This is how you can execute it without opening the Query Designer or Analyzer.
    Regards
    Karthik

  • SQ01 Queries usage tool

    Hi all,
    Is there any tool to find out recently used SQ01 queries?
    I.e., can we find out whether any queries have not been used for a long time?

    Hi Vidya,
    I am afraid there is no such tool as a one-click solution.
    However there are other options that might help you.
    1) For example you could check out whether the report is already generated.
    To find out the report name for a query, go to Query->More functions->Display Report name. If it isn't generated, the query has not been executed since the last change (or since the last system copy, if you are looking in a test system).
    2) Also there are BASIS tools that can show the usage of transactions and reports. Using those you might be able to find out more. Ask your Basis colleagues for more info.
    Best,
    Ralf

  • Queries/Workbook USAGE

    Hi,
      My task is to find out the least used queries & workbooks so we can clean up unused queries & workbooks.  I ran the BW STATS queries on 0BWTC_C10 and was able to see some information on queries, but I can't find any information for workbooks, like usage, last used date etc., from the BC queries that are built on the BW Statistics InfoCubes & MultiProvider.
    Where can we find this type of information?
    I would appreciate it if someone could help on this issue, as it is very urgent and I need to give a status by the end of today.
    Thanks
    BI Consultant

    hi,
    For the last use of a workbook you can try checking table RSRWBINDEX (the table with the list of workbooks) and /BI0/APERS_BOD00, field TCTTIMSTMP, to determine the workbooks.
    hope this helps.

  • Usage Based Optimization Wizard not finding Queries for Selected Measure Group (2008)

    I have configured my analysis services server to capture queries in an OLAP Query Log table. The connection string points to a table on a SQL Server relational database.  I have verified that the dbo.OlapQueryLog table was created. I have set it to
    capture every 5th query.  The table has been collecting query statistics for several hours. I opened my model in BIDS, clicked on the cube and opened the Aggregations tab.  I click on the measure group partition and select the Usage Based Optimization
    button. The wizard opens up and I click next.  In the "Specify Query Criteria" dialog, there is a warning that there "are no queries in the log for the selected measure group".  The problem is that there are queries but the UBO
    Wizard doesn't seem to be finding them.  Do I need to set a property in the BIDS to point to the location of the query log file in the relational database?
    davidg12

    Hi David,
    Are your SSAS database name and ID the same? We came across the same kind of issue a while ago. We initially created our database with one name and later renamed the database.
    The database name and ID should be the same in order to perform Usage Based Optimization.
    To do so:
    On the Solution Explorer, the top node is the database.
    Right click on that node and click Edit Database. In the property window you should be able to see the ID. The ID and the name of the database should be the same. 
    If they are not the same, change the database name to match the ID.
    Please mark as Answer if this helps!
    Rajasekhar.

  • I have some queries usage of RAID 1 on OSX

    Hi there
    I apologise for the length of this question, but there's quite a bit of relevant data. I work in a small typesetting company (<10 employees) and since a HDD failed on an old G4 iMac that we used as a data archive, we have learnt that without our data we are nothing. I have now been asked to implement a more reliable backup/recovery system (with minimum downtime) that increases the 'safety' of our customers' data. Rebuilding the archive took days to sort out, and because the archive was constantly accessed by our production Macs and a couple of PCs as well, it hit us hard.
    We are looking into turning an old G5 Powermac (1.8GHz single processor, 2.5GB RAM) into a RAID 1 (mirrored) system with 2 x 1TB SATA drives using Disk Utility. We would have this to protect against one HD failing and losing customers' data. I understand that this is not a backup system, just something that would protect customers' data should a drive critically fail. I have some queries regarding how this works though.
    1) How much (if any) maintenance is required on a RAID system? None of us are IT specialists - just typesetters with a bit of technical knowledge.
    2) Is a mirrored RAID system reliable, considering multiple people are reading/writing to the machine throughout the day?
    3) Do RAID 1 systems handle being accessed by different OSs (WinXP, Win7, OSX 10.4–10.6) well?
    4) Am I right in thinking that OSX would see the two drives as separate volumes?
    5) Should one drive fail and need to be 'rebuilt' via Disk Utility, can users still access the one working HD, or do you need to replace the failed HD and rebuild before anyone can access the data again?
    6) Considering Question 4 above, do we need to have a 3rd 'spare' 1TB Drive just in case?
    7) We are looking into a two-week backup system, backing up all the customers' data on a daily basis, with the previous week's disk being stored off-site. We were initially looking into either Carbon Copy Cloner or rsync to copy the data to external HDDs. How would you rate Time Machine against these products, and does anyone have any experience using these solutions with RAIDed Macs?
    8) Is there any 'downside' to RAID 1 systems?
    I know this is a lot of questions, but I really don't want to start down this route unless I understand it better first.
    Many thanks in advance for your contributions!

    Mac_fool wrote:
    Thanks for your reply!
    So, in order to have a RAID visible and accessible by multiple OSs, and to eliminate downtime during rebuilds, a hardware RAID would be necessary.
    Well, not quite. When you create a RAID in Disk Utility, the volume only exists as a MacOS X file system. Your Mac can share that volume to any other machines or operating systems.
    What I mean by a hardware RAID is some box whose output port is eSATA or USB or some other non-network storage port. On such a device, the logic to create the file system is inside the box. Any machine that is connected to it would see only a single disk. You would still have to ensure that different operating systems could understand whatever file system you were using (HFS+, FAT, NTFS, etc), but that is the same as if you had a non-RAID external drive.
    I have one of those. It is an older FireWire 800 drive that has two 300 GB disks inside. To any machine I connect that to, it appears as a single 600 GB drive. This particular device isn't designed for mirroring, however. If it were, it would only show itself as a 300 GB hard drive and I would have the capability to easily swap out the internal mechanisms when they fail.
    By 'hardware RAID', do you mean inserting a RAID controller card inside the G5, or a separate storage device, such as a RAIDed NAS-type-thing? Would an ethernet-connected NAS be slower to access than a G5 with internal SATA? Am I right in thinking that those RAID-ready NAS devices cannot be partitioned at all?
    I qualified the above to explicitly avoid NAS devices, just to simplify things.
    I'm not familiar with RAID controller cards. My guess is that they are just a cheaper alternative to an external RAID device. One would allow you to plug your own hard drives into the controller card and create the RAID. They would certainly provide higher performance than if the operating system were handling the details.
    If you want to have a RAID for data reliability, you really need an external, self-contained device that has a standard hardware storage port you could plug into any machine. It would look and act just like any other external hard drive. But if a drive failed, a light would flash on the box and you could just pull out the failed hard drive and replace it with a new one. You wouldn't even need to power anything down.
    For both RAID controller cards and stand-alone boxes, you may or may not need additional software to partition and format the drive. It depends on the model and manufacturer.
    Such a device could also be NAS. In this case, it would have an ethernet interface instead (or in addition). The difference is that such an interface would almost always be slower than a true storage interface like Firewire or USB. Plus, you would usually have to let the box itself handle its partitioning and formatting. Since it is designed to be a networked drive, you don't need another machine to be the server. It is its own server. This is useful in those cases where you don't want/need a dedicated server. The downside is that it probably runs some lowest-common denominator Windows networking for maximum compatibility. Foreign, networked filesystems are always going to be more flaky. Some software may not work. Some OS upgrade may give you hassles.
    To be honest, I'm no expert on RAIDs. In such situations, I usually just defer to people I know are experts. So, just buy one of these: (http://eshop.macsales.com/shop/hard-drives/RAID/Desktop/), plug it in, and go. I suggest RAID 5.

  • Usage of a specific InfoObject in Queries/Workbooks

    Hi everybody.
    I'm going to add a navigational attribute to the 0vendor InfoObject, and then I need to update many queries in which 0vendor exists, replacing 0vendor with the new nav. attribute.
    I need to know in which of the workbooks/queries that are in use (in the last 30 days) I have to make the changes.
    Please help !!!
    Thanks
    M.B.

    I'm afraid you'll have to dig into some tables... here are a few that might be useful:
    RSRREPDIR (you can find all queries linked to a Cube or get at least the guid of a query)
    RSZELTXREF (with the guid you found before, you can retrieve ALL the query elements)
    RSZELTDIR (gives more details about the elements... )
    How far you have to dig will depend on whether you want to look at variables on the InfoObject or just the use of the InfoObject in rows/columns... and obviously you can also dig the other way around.

  • Usage of joins & sub-queries

    Can anyone help me understand when to use joins versus sub-queries when manipulating more than one table? Which of the two is normally faster? We can do the same thing using either option...

    Balasubramaniam:
    I think I can be so bold as to say that, in general, joins are faster than sub-queries. But there are a lot of variables - data types, row/table sizes, indexes.
    Try it and compare, with your production data.
    Tom Best
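    A rough illustration of why the two forms are even comparable (sketched here in Java over in-memory lists, since the real answer depends on your database, as Tom says): an IN/EXISTS sub-query that filters one table by the keys of another is logically a semi-join, and an inner join against a key set returns the same rows. The JoinVsSubquery and Order names, and the sample data, are made up for the sketch.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class JoinVsSubquery {
    record Order(int id, int customerId) {}

    // Sub-query style: WHERE customer_id IN (SELECT id FROM active_customers)
    static List<Order> viaSubquery(List<Order> orders, Set<Integer> active) {
        return orders.stream()
                .filter(o -> active.contains(o.customerId()))
                .collect(Collectors.toList());
    }

    // Join style: inner join orders to active_customers on customer_id = id;
    // because the join key set has no duplicates, this is a semi-join and
    // yields exactly the same rows as the sub-query form.
    static List<Order> viaJoin(List<Order> orders, Set<Integer> active) {
        return orders.stream()
                .flatMap(o -> active.stream()
                        .filter(c -> c == o.customerId())
                        .map(c -> o))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order(1, 10), new Order(2, 20), new Order(3, 10));
        Set<Integer> active = Set.of(10);
        System.out.println(viaSubquery(orders, active).equals(viaJoin(orders, active)));
    }
}
```

    Since the forms are logically equivalent, most optimizers are free to rewrite one into the other, which is why measuring on your own production data, as suggested above, beats any rule of thumb.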

  • Report Developer Control Of Applying Hints to Analytics Queries

    There are numerous ways to apply hints to the queries generated by Analytics:
    - Table level
    - Join level
    - Evaluate calculation
    Each has its advantages and drawbacks.
    - Table level: applies the hint to every query that references the table.
    - Join level: applies the hint whenever the join is used in the query.
    - Evaluate: allows the report developer to include a hint, but can't control where Analytics decides to apply the hint.
    I propose another method for the report developer to apply hints, when needed, that uses join level hints. All the report developer
    does is add the hint column to the Answer or add a filter based on the hint column to the Answer to apply the hint.
    Setup
    NOTE: I suggest you do consistency checks along the way, especially before starting work in the next Layer, to be sure that all setup errors are resolved early.
    1) Start by defining a Logical SQL table in the Physical Layer using the following SQL: Select 1 Hint from dual
    2) Alias this table for each hint to be defined for report developer usage. As an example, alias the hint table, creating
    No Star and Parallel alias tables.
    3) Join each alias to the physical layer fact tables, using a complex join, where the hint could be applied. In the Join definition screen, put the hint in the HINT field and enter 1=1
    in the Expression section. Yes, we are creating a cartesian join between the hint table and the other joining table. As the hint table always returns one and only one row, there
    is no effect on the rows returned by the query. For No Star, you
    put NO_STAR_TRANSFORMATION in the Hint field. For Parallel, you put PARALLEL(<physical table name>, default, default), where the physical table name
    is the name of the actual database table, not the name of the alias table (Analytics will put the alias in the place of the database table name
    when it generates the SQL). Additionally, for hints that have no parameters, you only need to join it
    to the Fact tables in a query and not necessarily the dimensions. If you include fields from multiple fact tables, the hint will be applied
    for each fact table. So, you may see the hint multiple times in the SQL (something like SELECT /*+ NO_STAR_TRANSFORMATION NO_STAR_TRANSFORMATION */ t00001.col1...)
    4) Add the hint alias tables to the BMM Layer.
    5) Rename the Hint field in each of the BMM hint tables to identify the hint being applied. For No Star, change the column name from Hint to No Star Hint. For Parallel,
    change the column name from Hint to Parallel Hint.
    6) Set the hint column as a key.
    7) Join the BMM hint tables to the appropriate fact tables, using a complex join.
    8) Define each hint table as a dimension.
    9) Set the Logical Level in the Content tab in each of the sources of the joined tables to use the Detail of the hint dimension.
    10) Create a folder in the Presentation Layer called Hints
    11) Place each BMM hint field into the Presentation Layer Hints folder.
    To apply a hint to your Answer, either add the Hint field to your Answer or create a filter where the Hint field is equal to/is in 1 (the number one). To check that the generated SQL
    contains the hint, in Answers go into Administration, Session Manager, and view the log for the user (the user's log level will need to be set to 7 to see the generated SQL).
    Use of hints in more complex setups can be done by performing a setup of the hints that is parallel to the fact table setup. As an example, if you specify fragmentation content and a where
    clause in your BMM for your fact tables, you would setup parallel physical layer hint tables and joins, BMM objects and joins, fragmentation content, and where clauses based on the
    hint tables (only hint tables are referenced in the where clause).
    As any database person knows, hints can either help or degrade the performance of queries. So, taking the SQL of the pre-hint Answer and figuring out which hints give the best
    performance is suggested, prior to adding the hint fields/filters to the Answer.

    Hi Oliver,
    I would suggest you have a look at the below WLST scripts, which would give you the required report of the active threads, and it would send an email too.
    Topic: Sending Email Alert for Threads Pool Health Using WLST
    http://middlewaremagic.com/weblogic/?p=5433
    Topic: Sending Email Alert for Hogger Threads Count Using WLST
    http://middlewaremagic.com/weblogic/?p=5423
    Also, you can use the below script in case of stuck threads; this script would send you an email with the thread dumps from when the issue occurred.
    Topic: Sending Email Alert For Stuck Threads With Thread Dumps
    http://middlewaremagic.com/weblogic/?p=5582
    Regards,
    Ravish Mody
