Understanding Oracle statistics gathering

Hi experts,
I am new to Oracle performance tuning. Can anyone explain in simple terms what "Oracle statistics gathering" means? I have read about it on the Oracle site: http://docs.oracle.com/cd/A87860_01/doc/server.817/a76992/stats.htm.
But I don't understand it properly. Does it play any role in Oracle performance tuning? Does it improve the performance of the Oracle DB?
Regards,
Harshit

Hi,
You can check this in an easy way: ORACLE-BASE - Oracle Cost-Based Optimizer (CBO) And Statistics (DBMS_STATS)
>> Does it play any role in Oracle performance tuning? Does it improve the performance of the Oracle DB? : Yes
HTH

Similar Messages

  • Statistics gathering

    Hello everyone,
    I'm a little confused about "statistics gathering" in EBS, so I have some questions in my mind, which follow.
    Kindly clarify the concept for me. I'd really appreciate it.
    1. What is statistics gathering?
    2. What is the benefit of it?
    3. Can our ERP performance be better after this?
    One question outside this subject: if anyone wants to be an APPS DBA, must they be a full DBA (Oracle 10g, 9i, etc.), or is it enough to have the core DBA concepts such as backup, recovery, and cloning?
    Regards,
    Shahbaz khan

    1. What is statistics gathering?
    Statistics gathering is a process by which Oracle scans some or all of your database objects (such as tables and indexes) and stores the information in dictionary views such as DBA_TABLES and DBA_HISTOGRAMS. Oracle uses this information to determine the best execution path for the statements it has to execute (SELECT, UPDATE, etc.).
    2. What is the benefit of it?
    It helps queries become more efficient.
    3. Can our ERP performance be better after this?
    Typically, if you are experiencing performance issues, this is one of the first remedies.
    > One question outside this subject: if anyone wants to be an APPS DBA, must they be a full DBA, or is it enough to have the core DBA concepts?
    I will let Hussein or Helios answer that question. They can offer a lot of helpful advice. You can also refer to Hussein's recent thread on a similar topic.
    See Re: Time Management and planned prep
    Hope this helps,
    Sandeep Gandhi
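    For readers who want to try this themselves, here is a minimal sketch of a manual statistics gather. The schema and table names below are placeholders, not from this thread:

    ```sql
    -- Gather statistics for a single table, including its indexes
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SCOTT',   -- example schema
        tabname => 'EMP',     -- example table
        cascade => TRUE);     -- also gather index statistics
    END;
    /

    -- Check when statistics were last gathered
    SELECT table_name, num_rows, last_analyzed
    FROM   dba_tables
    WHERE  owner = 'SCOTT' AND table_name = 'EMP';
    ```

    The optimizer then uses NUM_ROWS and related column statistics to cost candidate execution plans.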

  • How to check the progress of statistics gathering on a table?

    Hi,
    I have started the statistics gathering on a few big tables in my database.
    How to check the progress of statistics gathering on a table? Is there any data dictionary views or tables to monitor the progress of stats gathering.
    Regds,
    Kunwar

    Hi all
    you can check with this small script.
    It lists the SID details for a long-running session:
    when it started,
    when it was last updated,
    how much time is still left,
    and the session status (ACTIVE/INACTIVE), etc.
    -- Author               : Syed Kaleemuddin_
    -- Script_name          : sid_long_ops.sql
    -- Description          : list the sid details for long running session like when it started when last update how much time still left.
    set lines 200
    col opname for a25
    Select
      a.sid,
      a.serial#,
      b.status,
      a.opname,
      to_char(a.start_time, 'dd-Mon-YYYY HH24:mi:ss') start_time,
      to_char(a.last_update_time, 'dd-Mon-YYYY HH24:mi:ss') last_update_time,
      a.time_remaining      as "Time Remaining Sec",
      a.time_remaining/60   as "Time Remaining Min",
      a.time_remaining/3600 as "Time Remaining HR"
    From v$session_longops a, v$session b
    where a.sid = b.sid
    and a.sid = &sid
    and a.time_remaining > 0;
    Sample output:
    SQL> @sid_long_ops
    Enter value for sid: 474
    old 13: and a.sid =&sid
    new 13: and a.sid =474
    SID SERIAL# STATUS OPNAME START_TIME LAST_UPDATE_TIME Time Remaining Sec Time Remaining Min Time Remaining HR
    474 2033 ACTIVE Gather Schema Statistics 06-Jun-2012 20:10:49 07-Jun-2012 01:35:24 572 9.53333333 .158888889
    Thanks & Regards
    Syed Kaleemuddin.
    Oracle Apps DBA

  • Oracle Data Gatherer

    Hello everyone,
    I installed Oracle 8.1.5.0.1 on Linux Red Hat 6.1. I start the
    Intelligent Agent and discover the service from the Enterprise
    Manager console on a Win98 station, but I can't start the Data
    Gatherer.
    The output from:
    vppcntl -start is:
    The Oracle Data Gatherer is running.
    vppcntl -ping is:
    The Oracle Data Gatherer is not running.
    When I use the console debug option, the output is:
    DBG: Error: lmsagbf returned error in vp_get_catmsg(), msgid =
    5016
    DBG: error
    DBG: vppdgerr: ERROR: Error: unable to open file
    DBG: Error: lmsagbf returned error in vp_get_catmsg(), msgid =
    5001
    DBG: error
    DBG: vppdgerr: ERROR: Error: unable to open data cartridge
    registry file
    DBG: Error: lmsagbf returned error in vp_get_catmsg(), msgid =
    5085
    DBG: error
    DBG: vppdgerr: ERROR: Error: failed to get DG cartridge
    information
    DBG: Error: lmsagbf returned error in vp_get_catmsg(), msgid =
    5773
    DBG: error
    DBG: vppdgerr: ERROR: Error: failed to initialize state
    information during startup
    The error 5001 is explained as follows:
    05001, 0, "Error: unable to open data cartridge registry file"
    // *Cause: an error occurred attempting to open the cartridge registry file (svppcart.dat)
    // *Action: ensure the file exists in the directory $OHOME/odg and that it has read access
    The file exists in the specified directory and has read access. If I delete the file, the error is the same. The content of the file is:
    ODB ALL vpxodb vpxodb
    SOL ALL svpxos svpxos
    If somebody has an idea, please tell me.
    Thanks!
    Costel

    The Data Gatherer is not usable on Linux with the 8.1.5 release. There is an existing bug (1121827) for this issue. You really need to upgrade to 8.1.6 or 8.1.7, as everything works much better on Linux with those versions.

  • Cgi statistics gathering under 6.1 and Solaris 9

    Hello all,
    is it possible to log, for CGI requests, the time spent handling each request?
    I see a lot of editable parameters in the 'Performance, Tuning and Scaling Guide' but can't figure out how to do that.
    Once, in a thread, I read "...enable statistics gathering, then add %duration% to your access log format line".
    I can't find the term %duration% in the guide; which parameter is meant?
    Regards Nick

    Hello elvin,
    thanks for your reply. Now I think I managed to make the web server log the duration of a CGI request, but I'm unsure how to interpret the value. E.g., in the access log I get:
    ..."GET /cgi/beenden.cgi ... Gecko/20040113 MultiZilla/1.6.3.1d" 431710"
    ..."GET /pic.gif ... Gecko/20040113 MultiZilla/1.6.3.1d" 670"
    so the last value corresponds to my %duration% in magnus.conf.
    431710 ... in msec? - makes no sense
    670 ... in msec?
    The complete string in magnus.conf reads as follows:
    Init fn="flex-init" access="$accesslog" format.access="%Ses->client.ip% - %Req->vars.auth-user% [%SYSDATE%] \"%Req->reqpb.clf-request%\" %Req->srvhdrs.clf-status% %Req->srvhdrs.content-length% \"%Req->headers.user-agent%\" \%duration%\""
    Regards, Nick

  • RSRV: Oracle statistics info table for table /BIC/FCube1 may be obsolete

    Hi,
    I run RSRV on cube1 and got several yellow lines such as:
    Oracle statistics info table for table /BIC/FCube1 may be OBSOLETE
    Oracle statistics info table for table /BIC/SSalesOrg may be OBSOLETE
    Oracle statistics info table for table /BIC/DCube1P may be OBSOLETE
    DB statistics for Infocube Cube1 must be recreated.
    I read here on SDN that running Correct Error in RSRV is only a temporary fix and that the best solution is to fix it at the database level with BRCONNECT.
    But the DBA says she has already run BRCONNECT, yet there was no change in several of the lines that came out as yellow... still the same OBSOLETE messages.
    1. Any additional suggestions to fix these problems at the database level?
    2. In case I decide to fix this with Correct Error in RSRV, what issues can I encounter with the cube?
    Can this lead to a failure of the cube?
    Will users encounter any issues with reports?
    Does fixing the OBSOLETE error message in RSRV carry any hazards?
    Thanks

    Hi,
    it is years of data, but how do you decide that the data is large enough to warrant creating a new cube?
    You noted that I should
    "verify if it makes sense to copy the data into a new cube".
    How do I verify that?
    Is creating a new cube the only solution to this OBSOLETE problem?
    Why is it referring only to particular tables as OBSOLETE, and doesn't that indicate that this is not a problem with the overall cube?
    Thanks

  • Setting of Optimizer Statistics Gathering

    I'm checking my DB settings, and the database is analyzed each day. But I notice that for a lot of tables the information shows the last analysis was about a month ago... Do I have to change some parameters?

    lesak wrote:
    > I don't have any data that shows my idea is good. I'd like to confirm on this forum whether my idea is good or not. I've planned to make some changes to get better performance from queries that read from the most-used tables. If this is a bad solution, that's also important information for me.
    One point of view is that your idea is bad. That point of view would be to figure out what the best access path for your query is and set that as a baseline, or figure out what statistics get you the correct plans on a single query that has multiple plans that are best with different values sent in through bind variables, and lock the statistics.
    Another point of view would be to gather current plans for currently used queries, then do nothing at all unless the optimizer suddenly decides to switch away from one, then figure out why.
    Also note that the default statistics gathering is done in a window; if you have a lot of tables changing, it could happen that you can't get stats in a timely fashion within the window.
    Whether the statistics gathering is appropriate may depend on how far off histograms are from describing the actual data distribution you see. What may be appropriate worry for one app may be obsessive tuning disorder for another. 200K rows out of millions may make no difference at all, or may make a huge difference if the newly added data is way off from what the statistics make the optimizer think it is.
    One thing you are probably doing right is recognizing that tuning particular queries may be much more useful than obsessing over statistics.
    Note how much I've used the word "may" here.

  • Understanding Oracle Clinical

    Hi, I am new to Oracle Clinical.
    Please, can anyone let me know where I can get all the information about Oracle Clinical 4.5.1? Basically, I need to understand the table structure of Oracle Clinical, or any "Understanding Oracle Clinical" material. Thanks
    Edited by: user10606991 on Nov 14, 2009 4:00 AM
    Edited by: user10606991 on Nov 14, 2009 4:01 AM

    Hi, thanks for the link, Satish Pachipulusu, but it explains only the interface of Oracle Clinical. I need to understand the table structure of Oracle Clinical, for example: which objects does Oracle Clinical create automatically when DVGs, Questions, Question Groups, DCMs, and DCIs are created?
    Please help in this regard. Thanks.

  • Oracle Statistics - Best Practice?

    We run stats with brconnect weekly:
    brconnect -u / -c -f stats -t all
    I'm trying to understand how some of our stats are old or stale.  Where's my gap?  We are running Oracle 11g and have Table Monitoring set on every table.  My user_tab_modifications is tracking changes in just over 3,000 tables.  I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Plus, we have our DBSTATC entries.  A lot of those entries were last analyzed some 10 years ago.  Does the above brconnect consider DBSTATC at all?  Or do we need to regularly run the following, as well?
    brconnect -u / -c -f stats -t dbstatc_tab
    I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    SQL> select count(*) from dba_tab_statistics
      2  where owner = 'SAPR3' and stale_stats = 'YES';
      COUNT(*)
          1681
    I realize that stats last analyzed some ten years ago does not necessarily mean they are no longer good but I am curious if the weekly stats collection we are doing is sufficient.  Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?

    Hi Richard,
    > We are running Oracle 11g and have Table Monitoring set on every table.
    The table monitoring attribute is not necessary anymore, or better said, it is deprecated, because these metrics are controlled by STATISTICS_LEVEL nowadays. The table monitoring attribute is relevant only for Oracle versions lower than 10g.
    > I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Correct, if BR*Tools parameter stats_change_threshold is set to its default. Brconnect reads the modifications (number of inserts, deletes and updates) from DBA_TAB_MODIFICATIONS and compares the sum of these changes to the total number of rows. It gathers statistics, if the amount of changes is larger than stats_change_threshold.
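    As a hedged illustration (this is a sketch of the kind of comparison described, not brconnect's actual implementation; the schema name and the 50% figure mirror the example and default mentioned in this thread), such a check could be approximated against the dictionary like this:

    ```sql
    -- Tables whose tracked changes exceed 50% of the stored row count
    -- (partitioned tables may appear once per partition in this view)
    SELECT m.table_name,
           m.inserts + m.updates + m.deletes AS changes,
           t.num_rows
    FROM   dba_tab_modifications m
           JOIN dba_tables t
             ON  t.owner      = m.table_owner
             AND t.table_name = m.table_name
    WHERE  m.table_owner = 'SAPR3'
    AND    t.num_rows > 0
    AND    (m.inserts + m.updates + m.deletes) / t.num_rows > 0.5;
    ```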
    > Does the above brconnect consider DBSTATC at all?
    Yes, it does.
    > I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    The column STALE_STATS in view DBA_TAB_STATISTICS is calculated differently. This flag is used by the Oracle standard DBMS_STATS implementation which is not considered by SAP - for more details check the Oracle documentation "13.3.1.5 Determining Stale Statistics".
    The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again.
    STALE_PERCENT determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The valid domain for stale_percent is non-negative numbers. The default value is 10%. Note that if you set stale_percent to zero, the AUTO STATS gathering job will gather statistics for this table every time a row in the table is modified.
    SAP has its own automatism (like described with brconnect and stats_change_threshold) to identify stale statistics and how to collect statistics (percentage, histograms, etc.) and does not use / rely on the corresponding Oracle default mechanism.
    > Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?
    No performance issue? No additional and unnecessary load on the system (e.g. dynamic sampling)? No brconnect runtime issue? Then you don't need to think about the brconnect implementation or special settings. Sometimes you need to tweak it (e.g. histograms, sample sizes, etc.), but then you have some specific issue that needs to be solved.
    Regards
    Stefan

  • Table Statistics Gathering Query

    Hey there,
    I'm currently getting trained in Oracle, and one of the questions posed to me was: create a table, insert a million rows into it, and find the number of rows in it. I tried the following steps.
    First, table creation:
    SQL> create table t1(id number);
    Table created.
    Data insertion:
    SQL> insert into t1 select level from dual connect by level < 50000000;
    49999999 rows created.
    Gathering statistics:
    SQL> exec dbms_stats.gather_table_stats('HR','T1');
    PL/SQL procedure successfully completed.
    Finally, counting the number of rows:
    SQL> select num_rows from user_tables where table_name='T1';
      NUM_ROWS
      49960410
    SQL> select count(*) from t1;
      COUNT(*)
      49999999
    My database version is:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    I would like to know why there are two different results for the same table when using "num_rows" from the view "user_tables" and the aggregate function "count()" over the same table. Please keep in mind that I'm studying Oracle and this is from a conceptual point of view only. I would also like to know how gathering table statistics with the dbms_stats package works.
    Thank You,
    Vishal

    vishm8 wrote:
    > Gathering statistics
    > SQL> exec dbms_stats.gather_table_stats('HR','T1');
    > PL/SQL procedure successfully completed.
    > I would like to know why there are two different results for the same table when using "num_rows" from the view "user_tables" and the aggregate function "count()" over the same table.
    Because you aren't specifying a value for estimate_percent in the procedure call (to gather_table_stats), Oracle will pick an estimate value for you. If you want to sample the entire table, you would need to explicitly specify that in your procedure call.
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_stats.htm#ARPLS68582
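    To make the sampling explicit, one could specify estimate_percent in the call. A sketch (with a 100% compute, num_rows should match COUNT(*) as of gather time, assuming no concurrent DML):

    ```sql
    -- Full compute instead of a sample
    EXEC DBMS_STATS.GATHER_TABLE_STATS('HR', 'T1', estimate_percent => 100);

    -- Or let Oracle choose the sample size automatically
    EXEC DBMS_STATS.GATHER_TABLE_STATS('HR', 'T1', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    ```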

  • Statistics gathering in 10g - Histograms

    I went through some articles in the web as well as in the forum regarding stats gathering which I have posted here.
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    In the above post author mentions that
    "It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
    Following one is post from Oracle forums.
    Statistics
    In the above post Mr Lewis mentions about adding
    method_opt => 'for all columns size 1' to the DBMS job
    And in the same forum post Mr Richard Foote has mentioned that
    "Not only does it change from 'FOR ALL COLUMNS SIZE 1' (no histograms) to 'FOR ALL COLUMNS SIZE AUTO' (histograms for those tables that Oracle deems necessary based on data distribution and whether sql statements reference the columns), but it also generates a job by default to collect these statistics for you.
    It all sounds like the ideal scenario, just let Oracle worry about it for you, except for the slight disadvantage that Oracle is not particularly "good" at determining which columns really need histograms and will likely generate many many many histograms unnecessarily while managing to still miss out on generating histograms on some of those columns that do need them."
    http://richardfoote.wordpress.com/2008/01/04/dbms_stats-method_opt-default-behaviour-changed-in-10g-be-careful/
    Our environment Windows 2003 server Oracle 10.2.0.3 64bit oracle
    We use the following script for our analyze job.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'username',
        tabname     => 'TABLE_NAME',
        method_opt  => 'FOR ALL COLUMNS SIZE AUTO',
        granularity => 'ALL',
        cascade     => TRUE,
        degree      => DBMS_STATS.DEFAULT_DEGREE);
    END;
    /
    This analyze job runs a long time (8 hrs), and we are also facing performance issues in the production environment.
    Here are my questions
    What is the option I should use for method_opt parameter?
    I am sure there are no hard and fast rules for this and each environment is different.
    But reading all the above post kind of made me confused and want to be sure we are using the correct options.
    I would appreciate any suggestions, insight or further readings regarding the same.
    Appreciate your time
    Thanks
    Niki

    Niki wrote:
    I went through some articles in the web as well as in the forum regarding stats gathering which I have posted here.
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    In the above post author mentions that
    "It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
    This analyze job runs a long time (8 hrs) and we are also facing performance issues in the production environment.
    Here are my questions:
    What is the option I should use for the method_opt parameter?
    I am sure there are no hard and fast rules for this and each environment is different.
    But reading all the above posts made me confused, and I want to be sure we are using the correct options.
    As the author of one of the posts cited, let me make some comments. First, I would always recommend starting with the defaults. All too often people "tune" their dbms_stats call only to make it run slower and gather less accurate stats than if they did absolutely nothing and let the default autostats job gather stats in the maintenance window. With your dbms_stats command I would comment that granularity => 'ALL' is rarely needed and certainly adds to the stats collection times. Also, if the data has not changed enough, why recollect stats? This is the advantage of using options => 'GATHER STALE'. You haven't mentioned what kind of application your database is used for: OLTP or data warehouse. If it is OLTP and the application uses bind values, then I would recommend disabling or manually collecting histograms (bind peeking and histograms should not be used together in 10g) using size 1 or size repeat. Histograms can be very useful in a DW where skew may be present.
    The one non-default option I find myself using is degree => dbms_stats.auto_degree. This allows dbms_stats to choose a DOP for the gather based on the size of the object. This works well if you don't want to specify a fixed degree or you would like dbms_stats to use a different DOP than the one the table is decorated with.
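    Putting these suggestions together, a hedged sketch of such a call (the schema name is a placeholder; adjust method_opt to your workload, as discussed above):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname    => 'APP_OWNER',                    -- placeholder schema
        options    => 'GATHER STALE',                 -- skip objects that haven't changed enough
        method_opt => 'FOR ALL COLUMNS SIZE REPEAT',  -- keep existing histograms, add no new ones
        degree     => DBMS_STATS.AUTO_DEGREE);        -- let dbms_stats pick the DOP per object
    END;
    /
    ```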
    Hope this helps.
    Regards,
    Greg Rahn
    http://structureddata.org

  • Statistics gathering error

    Hi all,
    I am running on AIX  version 5.3 with oracle 10.2.0.1 database.
    Since yesterday I have been encountering errors when gathering statistics on table partitions that already have data in them. I was able to gather without errors for years, but then suddenly I got the following errors:
    exec dbms_stats.gather_table_stats('BLP', 'ADJUSTMENT_TRANSACTION', 'ADJUSTMENT_TRANSACTION_P201311', GRANULARITY=>'PARTITION')
    BEGIN dbms_stats.gather_table_stats('BLP', 'ADJUSTMENT_TRANSACTION', 'ADJUSTMENT_TRANSACTION_P201311', GRANULARITY=>'PARTITION'); END;
    ERROR at line 1:
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at "SYS.DBMS_STATS", line 13044
    ORA-00942: table or view does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 13076
    ORA-06512: at line 1
    I also got the following errors in the alert_logs:
    ORA-00600: internal error code, arguments: [KSFD_DECAIOPC], [0x7000004FF189780], [], [], [], [], [], []
    The other day the alert_log generated this error when generating statistics also for another table:
    ORA-01114: IO error writing block to file 1001 (block # 4026567)
    ORA-27063: number of bytes read/written is incorrect
    IBM AIX RISC System/6000 Error: 28: No space left on device
    As I checked, the server has sufficient space.
    Do you have any idea what the problem could be? I can't generate table statistics at the moment due to this problem.
    Regards,
    Tim

    Hi Suntrupth,
    BLP@OLSG3DB  > show parameter filesystemio_options
    NAME                                 TYPE        VALUE
    filesystemio_options                 string      asynch
    BLP@OLSG3DB  > show parameter disk_asynch_io
    NAME                                 TYPE        VALUE
    disk_asynch_io                       boolean     TRUE
    No invalid objects were returned either:
    BLP@OLSG3DB  > select object_name from dba_objects where status='INVALID' and owner='SYS';
    no rows selected
    Regards,
    Tim

  • Statistics gathered during import

    Are tables analyzed automatically after an IMP import?
    When I ran the import of the TEST user:
    imp system/manager file=/home/oracle/test.dmp FROMUSER=TEST TOUSER=TEST
    and then executed the following query:
    SELECT table_name, last_analyzed FROM dba_tables WHERE owner = 'TEST';
    I found that the tables/indexes had been analyzed automatically.
    Does this mean statistics are gathered automatically during import?
    Oracle 10.2.0.1

    Please refer to this link:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#i1020893
    It seems statistics are taken for tables during export.
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com

  • Understanding Crawl Statistics

    Hi,
    I set up SES web crawler to crawl some internal sites. At the end of the crawl I received following statistics:
    Total number of documents fetched = 982
    Total number of documents updated = 1,051
    Document fetch failures = 13
    Document conversion failures = 0
    Total number of unique documents indexed = 56
    Total data collected = 4,787,947 bytes
    Total number of non-indexable documents = 926
    Average size of fetched document = 85,499 bytes
    Documents discovered = 1313
    Documents rejected = 318
    I could not understand:
    1. Why a certain number of documents were non-indexable (given that the crawler had already rejected certain documents).
    2. What is the difference between 'document fetch failures' and 'documents rejected'? (Documents rejected depends on certain types of URLs not accepted by the crawler.)
    3. What is the criterion for document rejection (I know images, audio, and video are not accepted)? Is there any other criterion? I mean, can a crawler plug-in implementation have this capability, and is it expected of the crawler plug-in to do so? I know that while creating a source we can define which URLs to reject.
    4. In the crawler logs I get errors like: "EQG-30173: 'Document url' Error reading document content". What could be the possible reason for this?
    Thanks in advance.
    Shakti

    Non-indexable documents are generally formats which we don't index (eg jpg, gif files).
    Rejected documents are usually those which are outside the boundary conditions for the crawl.
    For example, if you start a crawl at http://www.oracle.com/ then any links to other hosts (for example http://www.peoplesoft.com/ or even http://files.oracle.com/) will be outside the boundary. Check the second tab on the edit source pages.
    The EQG-30173 error normally means that the web server has failed to respond, or has sent invalid information. This should correspond with the "Document fetch failures" - 13 documents in this case.

  • Understanding Oracle BI Mobile App

    In "New Features" of BIP 11.1.1.5 (http://www.oracle.com/technetwork/middleware/bi-publisher/bip-11115-newfeatures-395605.pdf) you can read:
    Native and Web App for iPad & iPhone
    More and more business tasks are performed on mobile devices like iPhone and iPad, so why not browse reports and interact with your data on your mobile device with BI Publisher? Simply access your BI Publisher reports with Safari or other browsers on your iPad or iPhone, or install the Oracle BI Mobile App for iPhone and iPad to enjoy the same rich, fast interactive reporting experience as on your desktop or laptop.
    I tried to connect the BI Mobile App (Ipad) with my BIP 11.1.1.6 installation with no success:
    Error 404--Not Found
    From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
    10.4.5 404 Not Found
    Am I wrong in my understanding? Why can't I connect?
    What's the minimal setup?
    Or do I need to install the (whole) BI Suite?
    The "New Features" List makes me think that BI Publisher Standalone is sufficient!?
    Thank you,
    Florian.
    Edited by: fschulze on May 7, 2012 2:37 PM

    Hi
    Importing apps into the Apperian platform is very easy. There is no SDK or other limitation that is required for the app to work once deployed with Apperian. Inboarding is a very easy and quick process that requires the binary and some metadata information from the administrator and takes a couple of minutes max.
    Apperian also provides a Publishing API (public) that enables one-button integration from a development platform into Apperian. While Oracle BI Mobile App Designer is not yet integrated that way, it is a very quick process that took very little effort with other platforms.
    Once inboarded, any process to integrate the app with company policies (via inspection of the app, applying security or adoption policies, integration with corporate identity management, etc.) is done without having to rebuild the app. All in all, it is a very easy process to follow.
    Hope this helps!
