Chocolate Toxicity Analyzer

Hey all, I've got this program I've been working on and it's giving me a little trouble; maybe someone looking from a different perspective will see the problem.
The program runs from an input file that looks like this...
3
milk 10 20
semisweet 30 40
baker's 1 100
Here's the gist of what the program is supposed to do...
The local vet clinic has asked me to write a program that, given the circumstances of a situation where a dog has consumed chocolate, determines a suggested treatment. I know the following facts:
On average,
Milk chocolate contains 44 mg of theobromine per oz.
Semisweet chocolate contains 150 mg of theobromine per oz.
Baker's chocolate contains 390 mg of theobromine per oz.
And the suggested treatments, by the amount of theobromine consumed divided by the dog's body weight, are as follows:
<20 mg/kg: Monitor animal's behavior.
20-100 mg/kg: Induce vomiting and administer activated charcoal. Animal may return home.
>100 mg/kg: Induce vomiting and administer activated charcoal. Leave animal at clinic.
HINT: 3.5 oz = 1mg
Input:
The first line of the input file will contain a single integer indicating the number of data sets (in this case, 3).
The next lines will each contain "ChocolateType ChocolateAmount DogWeight", where:
1. "ChocolateType" will be one of the following: "Milk", "Semisweet", or "Baker's".
2. "ChocolateAmount" will be an integer (1-32) indicating the amount of chocolate consumed, in oz.
3. "DogWeight" will be an integer (5-150) indicating the weight of the dog in kg.
Output:
For each data set, output a single line containing the suggested treatment, given the parameters and the table above.
Example Output To Screen
Induce vomiting and administer activated charcoal. Animal may return home.
Induce vomiting and administer activated charcoal. Leave animal at clinic.
Monitor animal's behavior.
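Just to show where those lines come from, here's the sample data worked through by hand:
milk 10 20 -> 10 oz x 44 mg/oz = 440 mg; 440 mg / 20 kg = 22 mg/kg -> 20-100 range, so "Animal may return home."
semisweet 30 40 -> 30 x 150 = 4500 mg; 4500 / 40 = 112.5 mg/kg -> over 100, so "Leave animal at clinic."
baker's 1 100 -> 1 x 390 = 390 mg; 390 / 100 = 3.9 mg/kg -> under 20, so "Monitor animal's behavior."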
PROBLEM:
The main problem is with the first for loop. When the for loop runs for the first time it "skips" everything in it, then when it goes back up to the top to run again it does fine.
I usually don't like to post the WHOLE program, but I think it would help if you saw the whole thing. It may be a little confusing as to what everything does, but I tried to use very descriptive variable names to help you. Here's the program...
//Tyler Webb
//4-4-08
//All Rights Reserved
import java.io.*;
import java.util.*;

public class cta
{
    public static void main(String args[]) throws IOException
    {
        Scanner dataFile = new Scanner(new File("C:\\cta.in"));

        ////////////////////////////VARIABLES///////////////////////////////////////////
        //gets the value of how many times to run the for loop
        int control = dataFile.nextInt();
        int milkpoison = 44;
        int semisweetpoison = 150;
        int bakerspoison = 390;
        int dogweight = 0;
        int amountofpoison = 0;
        String typeofchocolate;
        int parsed = 0;
        String fake = " ";
        //Blood Poison Concentration
        int bpc = 0;
        int firstlvl = 0;
        int secondlvl = 0;
        int thirdlvl = 0;

        //for loop to run through each line in the data file
        for(int x = 0; x < control; x++)
        {
            String newDataFile = dataFile.nextLine();
            StringTokenizer currentLine = new StringTokenizer(newDataFile);
            while(currentLine.hasMoreTokens())
            {
                //System.out.println("hi");
                //resets variables
                dogweight = 0;
                amountofpoison = 0;
                parsed = 0;
                fake = " ";
                typeofchocolate = currentLine.nextToken();
                fake = currentLine.nextToken();
                int AmountOfConsumedChocolate = Integer.parseInt(fake);
                String dogweight1 = currentLine.nextToken();
                dogweight = Integer.parseInt(dogweight1);
                //figures the amount of poison by multiplying amount consumed
                //by amount of poison in each type of chocolate
                if(typeofchocolate.equalsIgnoreCase("milk"))
                {
                    amountofpoison = (AmountOfConsumedChocolate * milkpoison);
                    bpc = amountofpoison / dogweight;
                }
                if(typeofchocolate.equalsIgnoreCase("semisweet"))
                {
                    amountofpoison = (AmountOfConsumedChocolate * semisweetpoison);
                    bpc = amountofpoison / dogweight;
                }
                if(typeofchocolate.equalsIgnoreCase("baker's"))
                {
                    amountofpoison = (AmountOfConsumedChocolate * bakerspoison);
                    bpc = amountofpoison / dogweight;
                }
                System.out.println(bpc);
                if(bpc < 70)
                    System.out.println("Monitor animal's behavior.");
                else if(bpc > 350)
                    System.out.println("Induce vomiting and administer activated charcoal. Leave animal at clinic.");
                else
                    System.out.println("Induce vomiting and administer activated charcoal. Animal may return home.");
            }
        } //end of for loop
    }
}

From your question I'll have to ask: do you know how to use a debugger and step through your code?
Your problem is not in your "for" statement; it's in your use of the Scanner object and nextLine().
I've not used Scanner enough, nor looked up the details, to know all the technical aspects of your problem, but as far as I can see: nextInt() reads the 3 but leaves the rest of that line (including the newline) in the buffer, so your first call to nextLine() returns the empty remainder of line one--""--and only the call after that actually reads your first data line.
That's probably why, for file I/O, people often use some kind of file reader instead--readers that only read whole lines don't have this problem.
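Here's a minimal sketch of the usual fix (the class name and the println are just placeholders, not your full program): call nextLine() once right after nextInt() so the leftover newline is thrown away, and the loop then starts on the first data line.

import java.io.File;
import java.io.IOException;
import java.util.Scanner;

public class ScannerFixSketch
{
    public static void main(String[] args) throws IOException
    {
        Scanner dataFile = new Scanner(new File("C:\\cta.in"));
        int control = dataFile.nextInt();
        dataFile.nextLine(); //consume the newline that nextInt() left behind
        for(int x = 0; x < control; x++)
        {
            //first iteration now reads "milk 10 20" instead of ""
            String line = dataFile.nextLine();
            System.out.println(line);
        }
    }
}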
BTW: by publishing your program to a public forum, your little "All Rights Reserved" notice is no longer in effect.

Similar Messages

  • Studio 12.1 Performance Analyzer 7.7 problem with the 'er_print' utility

    I am having a minor but nagging problem with a regression in the 'er_print' utility of the Sun Performance Analyzer suite bundled in Studio 12. Is there maybe a patch available or in the works?
    I have not had any success in finding a resolution by searching the open literature…
    The issue is that the 'callers-callees' listing only dumps functions in alphabetical order, ignoring the sort order specified by 'sort'. This is a regression from the Performance Analyzer (7.4) behavior in Sun Studio 10. We only recently jumped to Studio 12.1.
    This functionality is documented here: http://docs.sun.com/app/docs/doc/821-0304/afaid?a=view (as well as many other references). To quote:
    “callers-callees
    Print the callers-callees panel for each of the functions, in the order specified by the function sort metric (sort)."
    I use a script input to er_print that in the past analyzed the top ‘N’ functions sorted on inclusive thread time. Now I have to be sure to dump ALL functions and need a third-party search tool to find that information in the resulting report.
    Has anyone heard of this problem, or are there Performance Analyzer patches available? I saw some for 7.6 and another for an unspecified version, but have not seen this problem in any patch notices.
    Thanks.
    Regards,
    Steve

    Nik, thanks for taking a look. We can't go to 12.2 because we're a software developer and we'll lose our binary compatibility with the release we've been building for the last few months. I'm a systems guy and will paste in a developer's example below.
    Note Marc's url shows a 12.1 Performance Analyzer patch 142369-01 we have not yet installed. The patch notice description doesn't show much. I'll pass on patch info to remote user/developer.
    Developer example:
    I use a script input to er_print that in the past analyzed the top ‘N’ functions sorted on inclusive thread time. Now I have to be sure to dump ALL functions and need a third-party search tool to find that information in the resulting report.
    Here’s a shortened example (limited to 4 entries) of the behavior I’m seeing… the functions shown in callers-callees are NOT those selected by the sort metric.
    = = = =
    sysun046% er_print /scratch/test.4.er
    /scratch/test.4.er: Experiment has warnings, see header for details
    (/opt/sunstudio12.1/bin/../prod/bin/sparcv9/er_print) sort i.total
    Current Sort Metric: Inclusive Total Thread Time ( i.total )
    (/opt/sunstudio12.1/bin/../prod/bin/sparcv9/er_print) cmetrics a.total:e.user:i.user:e.total:i.total
    Current caller-callee metrics: a.total:e.user:i.user:e.total:i.total:name
    Current caller-callee sort metric: Attributed Total Thread Time ( a.total )
    (/opt/sunstudio12.1/bin/../prod/bin/sparcv9/er_print) limit 4
    (/opt/sunstudio12.1/bin/../prod/bin/sparcv9/er_print) sample_select 22-53
    Exp Sel Total
    === ===== =====
    1 22-53 57
    (/opt/sunstudio12.1/bin/../prod/bin/sparcv9/er_print) functions
    Functions sorted by metric: Inclusive Total Thread Time
    Excl. Incl. Incl. Total Name
    User CPU User CPU Thread
    sec. sec. sec.
    26.015 26.015 113.530 <Total>
    0. 26.015 113.530 ACE_Task_Base::svc_run(void*)
    0. 26.015 113.530 ACE_Thread_Adapter::invoke()
    0. 26.015 113.530 ORB_Task::svc()
    (/opt/sunstudio12.1/bin/../prod/bin/sparcv9/er_print) callers-callees
    Functions sorted by metric: Inclusive Total Thread Time
    Callers and callees sorted by metric: Attributed Total Thread Time
    Attr. Total Excl. Incl. Excl. Total Incl. Total Name
    Thread User CPU User CPU Thread Thread
    sec. sec. sec. sec. sec.
    113.530 26.015 26.015 113.530 113.530 *<Total>
    113.530 0. 26.015 0. 113.530 lwpstart
    Attr. Total Excl. Incl. Excl. Total Incl. Total Name
    Thread User CPU User CPU Thread Thread
    sec. sec. sec. sec. sec.
    0.010 0. 0.010 0. 0.010 ACE_Message_Block::clone(unsigned long)const
    0. 0. 0.010 0. 0.010 *ACE_Data_Block::clone(unsigned long)const
    0.010 0.398 0.398 0.398 0.398 memcpy
    Attr. Total Excl. Incl. Excl. Total Incl. Total Name
    Thread User CPU User CPU Thread Thread
    sec. sec. sec. sec. sec.
    0.001 0. 0.003 0. 0.011 ACE_Select_Reactor_T<ACE_Select_Reactor_Token_T<ACE_Token> >::resume_handler(int)
    0.001 0.001 0.001 0.001 0.001 *ACE_Guard<ACE_Select_Reactor_Token_T<ACE_Token> >::release()
    Attr. Total Excl. Incl. Excl. Total Incl. Total Name
    Thread User CPU User CPU Thread Thread
    sec. sec. sec. sec. sec.
    0.010 0. 0.010 0. 0.010 TAO_Synch_Queued_Message::clone(ACE_Allocator*)
    0. 0. 0.010 0. 0.010 *ACE_Message_Block::clone(unsigned long)const
    0.010 0. 0.010 0. 0.010 ACE_Data_Block::clone(unsigned long)const
    = = = = =
    Nik, thanks for taking a look.
    Steve

  • What is an enterprise solution for Windows and application event log management and analysis?

    Hi
    I would like to know what the enterprise solution is for Windows and application event log management and analysis.
    I have recently done some research and found two applications that seem professional: 1) ManageEngine EventLog Analyzer, 2) SolarWinds LEM (SolarWinds Log & Event Manager).
    I want to know the point of view of Microsoft experts, and to hear their experience and solutions.
    thanks in advance.

    Consider MS System Center 2012.
    Rgds

  • Error when starting Bex analyzer

    Hi All!
    We have a worldwide BW-SEM application. In one country they get the following error message when starting BEx Analyzer:
    <install error> Missing ActiveX component: Business Explorer Global Services
    Does anyone have a hint on what to do?
    Thanks for your help!
    Best Regards
    Pontus

    Hi,
    Try checking with SAPBEXC.xla,
    and take a look at OSS note 585643.
    You may need to manually register the .dll files with regsvr32.
    Do an installation check of the BEx Analyzer as follows:
    In the BEx Analyzer menu, choose Business Explorer -> Installation Check. Once the Excel sheet opens, press the Start button to start the check. Check the entries in red to see any missing/old .ocx and .dll files.

  • Analyzer 6.1.1 Not showing all data in 800x600 PC screen settings

    I have a user who is not able to see the whole Analyzer view because their client PC screen size is set to 800x600. I created the Analyzer views with my setting at 1024x768. I need the Analyzer server to scale all of the views to 100% instead of using fixed pixels.

    Hi. Set the border and the properties to adjust/fit to the screen. Then, when the user opens the report in 800x600, it will adjust automatically to the screen.
    Regards,
    Gustavo Santade

  • Status of Data in BEx Analyzer

    Hello ,
    Is it possible to hide "status of data" in BEx Analyzer? I have read some documents, but they all say it is only possible with the Web Designer.
    I have a query on a MultiProvider, and I don't want to show the status of data information in the report.
    thank you ,
    blue

    You can go to design mode in the workbook, delete the text element which is showing the status of data, come out of design mode, and save the workbook.
    Edited by: Pravender on May 18, 2010 2:19 PM

  • SAP BPC MS 7.5 with Extended Analytic Analyzer and EPM connector

    Hi experts,
    I need your inputs regarding the Extended Analytic Analyzer add-in.
    I installed the SAP BusinessObjects Extended Analytic Analyzer hoping to integrate Xcelsius with SAP BPC MS 7.5.
    I am following the HTG to integrate them, but got lost.
    In the EPM connector steps, I cannot find the OPERATION TYPE option "Retrieve data using Analyzer Report".
    The only options available under operation type are
    EPM Report
    Retrieve Environments
    Retrieve Models
    Retrieve Dimensions
    Retrieve Dimension Members
    Input Data
    Retrieve Business Process Flows
    Retrieve Context
    Retrieve Members Property Values
    RetrieveText From Library
    It doesn't include the option "Retrieve data using Analyzer Report".
    I'm wondering if there are different versions of the EPM connector. Does my EPM connector differ from the one in the HTG?
    Also, in Excel under Extended Analytic Analyzer, the function =GETREPORTDEFINITION() is missing.
    Please help me on this, guys.
    Thanks in advance.
    yajepe

    It seems a very good opportunity to use FIM.
    FIM was designed especially for exchanging data between different SAP products.
    FIM provides an easy way to do the conversion using wizards, and will also assure data integrity and quality.
    This would be the way forward, but more details have to be defined during the implementation.
    I hope this will help you.
    Kind Regards
    Sorin Radulescu

  • "analyze index"  vs  "rebuild index"

    Hi,
    I don't understand the difference between "analyze index" and "rebuild index".
    I have a table where I do a lot of inserts, updates, and queries. What is the best thing to do?
    thanks
    Giordano

    "When you use the dbms_stats.gather_schema_stats package with the cascade=>true option, you are also collecting stats for the indexes; no need to collect stats separately using dbms_stats.gather_index_stats." Of course, but I was referring to the rebuild index question. Therefore I only mentioned GATHER_INDEX_STATS.
    "Auto_sample_size has many problems/bugs in 9i." Ok, I didn't know that - I'm using 10gR2.
    But this discussion made me curious. So I tried something (10gR2):
    CREATE TABLE BIG NOLOGGING AS
    WITH GEN AS (
    SELECT ROWNUM ID FROM ALL_OBJECTS WHERE ROWNUM <=10000)
    SELECT V1.ID,RPAD('A',10) C FROM GEN V1,GEN V2
    WHERE ROWNUM <=10000000;
    SELECT COUNT(*) FROM BIG;
    COUNT(*)
    10000000
    So I had a Table containing 10 Million rows. Now I indexed ID:
    CREATE INDEX BIG_IDX ON BIG(ID)
    I tested two different methods:
    1.) GATHER_TABLE_STATS with estimate 10%
    EXEC DBMS_STATS.GATHER_TABLE_STATS(TABNAME=>'BIG',OWNNAME=>'DIMITRI',CASCADE=>TRUE,ESTIMATE_PERCENT=>10);
    It took about 6 seconds (I only set timing on in sqlplus, no 10046 trace). Now I checked the estimated values:
    SELECT NUM_ROWS,SAMPLE_SIZE,ABS(10000000-NUM_ROWS)/100000 VARIANCE,'TABLE' OBJECT FROM USER_TABLES WHERE TABLE_NAME='BIG'
    UNION ALL
    SELECT NUM_ROWS,SAMPLE_SIZE,ABS(10000000-NUM_ROWS)/100000 VARIANCE,'INDEX' OBJECT FROM USER_INDEXES WHERE INDEX_NAME='BIG_IDX';
    NUM_ROWS SAMPLE_SIZE VARIANCE OBJEC
    9985220 998522 ,1478 TABLE
    9996210 999621 ,0379 INDEX
    2.) GATHER_TABLE_STATS with DBMS_STATS.AUTO_SAMPLE_SIZE
    EXEC DBMS_STATS.DELETE_TABLE_STATS(OWNNAME=>'DIMITRI',TABNAME=>'BIG');
    EXEC DBMS_STATS.GATHER_TABLE_STATS(TABNAME=>'BIG',OWNNAME=>'DIMITRI',CASCADE=>TRUE,ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE);
    It took about 1.5 seconds. Now the results:
    NUM_ROWS SAMPLE_SIZE VARIANCE OBJEC
    9826851 4715 1,73149 TABLE
    10262432 561326 2,62432 INDEX
    The 10% estimate was more exact - but a 1.7 and a 2.6 percent variance is still OK. It's also very interesting that using AUTO_SAMPLE_SIZE
    causes Oracle to execute roughly a 5% estimate for the index and a 0.5% estimate for the table.
    I tried again with a table containing only 1 million records, and Oracle did an estimate with 100% for the index.
    So for me, I will continue using AUTO_SAMPLE_SIZE. It's very flexible, fast and accurate.
    Dim
    PS: Is there a way to format code like one can do in HTML using <code> or <pre>?

  • Rebuild index vs Analyze index

    Hi All,
    I am really confused about rebuilding an index versus analyzing an index.
    Could anyone please help me out: what is the difference between them?
    How do I perform an analyze of indexes and a rebuild of indexes, for both Oracle 9i and 10g databases?
    Thanks a lot

    CKPT wrote:
    "You can see the posts of experts like Jonathan. You say you are confused about why we need to analyze before rebuilding an index? If an index is analyzed, the whole statistics of the index will be gathered... then you can check what the height of the index is, and according to the height of the index you can decide whether the index really needs to be rebuilt or not. Let's see further posts from experts if this is not clear. Thanks."
    OK, so you determine the height of an index is (say) 4. What then? If you decide to rebuild the index and the index remains at a height of 4, what now? Was it really worth doing, and do you rebuild it again, as the index height is still 4 and still within your index rebuild criteria? At what point do you decide that rebuilding the index just because it has a height of 4 is a total waste of time in this case?
    OK, so you determine the index only has a height of (say) 3; does that mean you don't rebuild the index? But what if, by rebuilding the index, it now reduces to a height of just 1? Perhaps not rebuilding the index, even though it has just a height of 3 and doesn't currently meet your index rebuild criteria, is totally the wrong thing to do, and a rebuild would result in a significantly leaner and more efficient index structure.
    And what if it's pointless rebuilding an index with a height of 4, but another index with a height of 3 is a perfect candidate to be rebuilt?
    Perhaps knowing just the height of an index leaves one totally clueless after all as to whether the index might benefit from an index rebuild ...
    Cheers
    Richard Foote
    http://richardfoote.wordpress.com/

  • Decimal Point in Analyzer 6.2.1

    Hi all,
    Is it possible to set, on the Analyzer 6.2.1 server, the decimal symbol and digit grouping symbol for the format of numbers? When an export to Excel is done, the number format is taken from the Analyzer server. If the client's Regional Options for decimal symbol and digit grouping symbol are different from the Analyzer server's, the export to Excel does not work properly. Is it possible to set the export to Excel to take the number settings from the client side?

    As far as I know this shouldn't be the case. The Java applet should take the local client settings, period, and not be affected by how the server is set. You should ensure that you have the 'international' version of the Sun Java Plugin installed and not the UK/US version. Hope this helps.
    Paul Armitage, Analitica Ltd. - www.analitica.co.uk

  • Long time to start Java Web Client (Analyzer 6.2.1)??

    Does anyone know why Analyzer (6.2.1) takes a long time to start the Java Web Client? Sometimes it even takes more than 5 minutes. I think it is the Java Plug-in starting on the client computer. Any solution?

    The key to Analyzer 6.2.1 running correctly is the version of the Sun Java Plugin. The ideal (most optimal) version is 1.3.0_02. Secondary to this, if Analyzer performs OK once you are logged in, then it could be down to the speed of your connection. The applet compiles at runtime (unlike Analyzer 5, which was a one-time download), so the delay in getting to the login screen could be this download. Hope this helps.
    Paul Armitage, Analitica Ltd. - www.analitica.co.uk

  • Statspack analysis of a 9i database

    Hi experts,
    Please help me with sorting out the Statspack report of my production DB on 9i, and please advise some recommendations after analyzing my Statspack view.
    Elapsed:     3.75 (min)     225 (sec)
    DB Time:     7.84 (min)     470.65 (sec)
    Cache:     10,016 MB     
    Block Size:     8,192 bytes     
    Transactions:     2.01 per second     
    Performance Summary
    Physical Reads:     15,666/sec          MB per second:     122.39 MB/sec     
    Physical Writes:     22/sec          MB per second:     0.17 MB/sec     
    Single-block Reads:     1,412.69/sec          Avg wait:     0.03 ms     
    Multi-block Reads:     1,916.26/sec          Avg wait:     0.05 ms     
    Tablespace Reads:     3,346/sec          Writes:     22/sec     
    Top 5 Events
    Event     Percentage of Total Timed Events
    CPU time     79.89%
    PX Deq: Execute Reply     6.38%
    db file scattered read     4.32%
    SQL*Net more data from dblink     4.29%
    db file sequential read     2.00%
    Tablespace I/O Stats
    Tablespace     Read/s     Av Rd(ms)     Blks/Rd     Writes/s     Read%     % Total IO
    TS_CCPS     3,117      0     2.5      0      100%     92.5%
    TS_OTHERS     204      0.2     26.2      1      99%     6.09%
    TS_AC_POSTED03     19      1.9     127      2      89%     0.63%
    Load Profile
    Logical reads:     42,976/s          Parses:     39.41/s     
    Physical reads:     15,666/s          Hard parses:     5.43/s     
    Physical writes:     22/s          Transactions:     2.01/s     
    Rollback per transaction:     0%          Buffer Nowait:     100%     
    4 Recommendations:
    Your database has relatively high logical I/O at 42,976 reads per second. Logical Reads includes data block reads from both memory and disk. High LIO is sometimes associated with high CPU activity. CPU bottlenecks occur when the CPU run queue exceeds the number of CPUs on the database server, and this can be seen by looking at the "r" column in the vmstat UNIX/Linux utility or within the Windows performance manager. Consider tuning your application to reduce unnecessary data buffer touches (SQL Tuning or PL/SQL bulking), using faster CPUs or adding more CPUs to your system.
    You are performing more than 15,666 disk reads per second. High disk latency can be caused by too-few physical disk spindles. Compare your read times across multiple datafiles to see which datafiles are slower than others. Disk read times may be improved if contention is reduced on the datafile, even though read times may be high due to the file residing on a slow disk. You should identify whether the SQL accessing the file can be tuned, as well as the underlying characteristics of the hardware devices.
    Check your average disk read speed later in this report and ensure that it is under 7ms. Assuming that the SQL is optimized, the only remaining solutions are the addition of RAM for the data buffers or a switch to solid state disks. Give careful consideration to these tablespaces with high read I/O: TS_CCPS, TS_OTHERS, TS_AC_POSTED03, TS_RATING, TS_GP.
    You have more than 1,222 unique SQL statements entering your shared pool, with the resulting overhead of continuous RAM allocation and freeing within the shared pool. A hard parse is expensive because each incoming SQL statement must be re-loaded into the shared pool, with the associated overhead involved in shared pool RAM allocation and memory management. Once loaded, the SQL must then be completely re-checked for syntax and semantics, and an executable generated. Excessive hard parsing can occur when your shared_pool_size is too small (and reentrant SQL is paged out) or when you have non-reusable SQL statements without host variables. See the cursor_sharing parameter for an easy way to make SQL reentrant, and remember that you should always use host variables in your SQL so that it can be reentrant.
    Instance Efficiency
    Buffer Hit:     69.13%          In-memory Sort:     100%     
    Library Hit:     96.4%          Latch Hit:     99.99%     
    Memory Usage:     95.04%          Memory for SQL:     64.19%     
    2 Recommendations:
    Your Buffer Hit ratio is 69.13%. The buffer hit ratio measures the probability that a data block will be in the buffer cache upon a re-read of the data block. If your database has a large number of frequently referenced table rows (a large working set), then investigate increasing your db_cache_size. For specific recommendations, see the output from the data buffer cache advisory utility (using the v$db_cache_advice utility). Also, a low buffer hit ratio is normal for applications that do not frequently re-read the same data blocks. Moving to SSD will alleviate the need for a large data buffer cache.
    Your shared pool may be filled with non-reusable SQL, given 95.04% memory usage. The Oracle shared pool contains Oracle's library cache, which is responsible for collecting, parsing, interpreting, and executing all of the SQL statements that go against the Oracle database. You can check the dba_hist_librarycache table in Oracle 10g to see your historical library cache RAM usage.
    SQL Statistics
    Wait Events
    Event     Waits     Wait Time (s)     Avg Wait (ms)     Waits/txn
    PX Deq: Execute Reply     137     30     219     0.3
    db file scattered read     431,159     20     0     951.8
    SQL*Net more data from dblink     51,140     20     0     112.9
    db file sequential read     317,856     9     0     701.7
    io done     6,842     5     1     15.1
    db file parallel read     21     1     52     0.0
    local write wait     250     1     4     0.6
    db file parallel write     825     1     1     1.8
    SQL*Net message from dblink     208     1     3     0.5
    log file parallel write     2,854     1     0     6.3
    0 Recommendations:
    Instance Activity Stats
    Statistic     Total     per Second     per Trans
    SQL*Net roundtrips to/from client     87,889     390.6     194.0
    consistent gets     10,141,287     45,072.4     22,387.0
    consistent gets - examination     884,579     3,931.5     1,952.7
    db block changes     100,342     446.0     221.5
    execute count     18,913     84.1     41.8
    parse count (hard)     1,222     5.4     2.7
    parse count (total)     8,868     39.4     19.6
    physical reads     3,525,003     15,666.7     7,781.5
    physical reads direct     539,879     2,399.5     1,191.8
    physical writes     5,132     22.8     11.3
    physical writes direct     29     0.1     0.1
    redo writes     1,598     7.1     3.5
    session cursor cache hits     4,378     19.5     9.7
    sorts (disk)     0     0.0     0.0
    sorts (memory)     4,988     22.2     11.0
    table fetch continued row     310     1.4     0.7
    table scans (long tables)     82     0.4     0.2
    table scans (short tables)     18,369     81.6     40.6
    workarea executions - onepass     0     0.0     0.0
    5 Recommendations:
    You have high network activity with 390.6 SQL*Net roundtrips to/from client per second, which is a high amount of traffic. Review your application to reduce the number of calls to Oracle by encapsulating data requests into larger pieces (i.e. make a single SQL request to populate all online screen items). In addition, check your application to see if it might benefit from bulk collection by using PL/SQL "forall" or "bulk collect" operators.
    You have 3,931.5 consistent gets examination per second. "Consistent gets - examination" is different than regular consistent gets. It is used to read undo blocks for consistent read purposes, but also for the first part of an index read and hash cluster I/O. To reduce logical I/O, you may consider moving your indexes to a large blocksize tablespace. Because index splitting and spawning are controlled at the block level, a larger blocksize will result in a flatter index tree structure.
    You have high update activity with 446.0 db block changes per second. The db block changes are a rough indication of total database work. This statistic indicates (on a per-transaction level) the rate at which buffers are being dirtied, and you may want to optimize your database writer (DBWR) process. You can determine which sessions and SQL statements have the highest db block changes by querying the v$session and v$sesstat views.
    You have high disk reads with 15,666.7 per second. Reduce disk reads by increasing your data buffer size or speed up your disk read speed by moving to SSD storage. You can monitor your physical disk reads by hour of the day using AWR to see when the database has the highest disk activity.
    You have high small-table full-table scans, at 81.6 per second. Verify that your KEEP pool is sized properly to cache frequently referenced tables and indexes. Moving frequently referenced tables and indexes to SSD or the Write Accelerator will significantly increase the speed of small-table full-table scans.
    Buffer Pool Advisory
    Current:     3,599,469,418 disk reads     
    Optimized:     1,207,668,233 disk reads     
    Improvement:     66.45% fewer     
    The Oracle buffer cache advisory utility indicates 3,599,469,418 disk reads during the sample interval. Oracle estimates that doubling the data buffer size (by increasing db_cache_size) will reduce disk reads to 1,207,668,233, a 66.45% decrease.
    Init.ora Parameters     
    Parameter     Value     
    cursor_sharing     similar     
    db_block_size     8,192     
    db_cache_size     8GB     
    db_file_multiblock_read_count     32     
    db_keep_cache_size     1GB     
    hash_join_enabled     true     
    log_archive_start     true     
    optimizer_index_caching     90     
    optimizer_index_cost_adj     25     
    parallel_automatic_tuning     false     
    pga_aggregate_target     2GB     
    query_rewrite_enabled     true     
    session_cached_cursors     300     
    shared_pool_size     2.5GB     
    _optimizer_cost_model     choose     
    1 Recommendations:
    You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.

    Systemwide Tuning using STATSPACK Reports [ID 228913.1] and http://jonathanlewis.wordpress.com/statspack-examples/ should be useful.

  • Single Sign On and BeX Analyzer

    Hello All,
    Does anyone know of a way of using windows authentication (via Active Directory) to automate the login prompt in the BeX Excel Analyzer? I have found a solution for the BI portal via SPNego, but have not been able to find any discussion or documentation about automating the BeX Excel Analyzer login prompt. Any help is greatly appreciated.
    Thanks, --Matt

    Hi Derick,
    I want to split our discussion into two parts:
    1) Sign-on
    2) Viewing data based on the hierarchy
    1) Before discussing the sign-on, I want to know which connectivity you are using: Live Office or QaaWS?
    2) We can make the second point possible in two ways: one is by providing restrictions at the universe level,
    and the other is through the use of flash variables.
    Using flash variables:
    The main idea of using flash variables is reading the user ID from BO authentication; based on that, we fetch the hierarchy level of that user. Then we use some Excel logic to hide the data from the low-level hierarchy (here we use dynamic visibility for components).
    I hope this is what you are looking for....
    If so, I have more points on how to achieve such a scenario.
    Please provide your BO environment details, so that it will be easy to identify the best way to achieve it.
    Regards,
    AnjaniKumar C.A.

  • How to change Analyzer user password with Administration API?

    Hi,
    I would like to change an Analyzer user's password with the Administration API. Can someone post some sample commands to do the task?
    I would just like to write an application to change end users' Analyzer passwords. As I see it, I would need to do the following:
    1. Login with the admin userid/password.
    2. Execute some method to change the password for the required userid. I think the input parameters should be the userid (of the user whose password I would like to change) and the new password (the new password for the user).
    3. Logout.
    Can someone post some sample code (commands to execute)?
    Thanks,
    grofaty
    My system: Analyzer Server 7.0.1, Essbase Server 7.1, Windows XP SP2

    Originally posted by knightrich:
    "Hello Mr. Jordan. I would like to exchange some thoughts about "housekeeping" Analyzer reports in preparation for migration from Analyzer 7.0.0.0.01472 to 9.x: ... Did you solve such a problem, or do you have an idea whether it could be solved with the Admin API methods? ... Migration from 7.00 to 9.x: as we heard last week, the "Migration Wizard for Reports" in 9.3 should be able to migrate reports. Do you have experience or more detailed information about that wizard? Many thanks in advance. knightrich"
    knightrich,
    I'd like to be more help, but I have no experience with System 9. I did substantial cleanup when we migrated from Analyzer 6 to Analyzer 7.1, and even more cleanup when moving up to 7.2, but our installation is smaller in scale than yours and we didn't need to automate report cleanup.
    You might be able to get the ownership information you need through the back door, doing a direct query on the database, but simpler might be an export of users, at least from 7.0. (This facility probably doesn't exist in System 9; it was dropped in 7.2 in favor of an undocumented API.) The export file is an XML file that could easily be parsed to identify reports that have the administrator as user, and then a second pass could delete those with other ownership as well. As previously suggested, you might be able to get this with a well-crafted SQL query against the repository.
    Procedurally, we have both public reports that have the blessing of management and are widely available, owned by a "public owner", and private reports developed by individual users and shared or not. Our team maintains the public reports, but not the private reports. We may be asked to make a previously private report public and take over maintenance of it.
    I hope that you can find a solution that meets your needs. Certainly a call to customer support to identify a poorly documented feature would be in order.

  • Difference between BEx Browser and BEx Analyzer

    Hi,
    Can anybody tell me what the difference is between the BEx Browser and the BEx Analyzer, and how end users will access the reports and access SAP?

    Hi
    BEx Web Analyzer
    The BEx Web Analyzer is a standalone, convenient Web application for data analysis that you can call using a URL or as an iView in the portal.
    The Web Analyzer allows you to execute ad hoc analyses on the Web: When you have selected a data provider (query, query view, InfoProvider, external data source), the data is displayed in a table with a navigation pane. You can navigate to the data and use other Web Analyzer functions available in the application toolbar. For example, you can change the type of data display, use the information broadcasting functions to broadcast your analyses to others, and create printable versions of your analyses.
    In the Web Analyzer, you can save the data view generated from navigation and analysis as a query view by choosing Save View in the context menu, and you can save the ad hoc analysis by choosing Save As. When the query view is saved, only the data view is saved; when the ad hoc analysis is saved, the entire Web application is saved, including the properties of Web items and the layout of the data.
    Check the link for more info
    http://help.sap.com/erp2005_ehp_03/helpdata/EN/00/e8d13f7fb44c21e10000000a1550b0/frameset.htm
    BEx Browser
    The Business Explorer Browser (BEx Browser) makes it possible for you to access all document types of the Business Information Warehouse that are assigned to your role or that you have stored in your favorites. You can select and open documents assigned to you in the BEx Browser or store and manage new documents in the BEx Browser.
    Document types that you can work with in the BEx Browser are:
    ·        BW workbooks
    ·        Documents that are stored in the Business Document Service (BDS)
    ·        Links (references to file system, shortcuts)
    ·        Links to internet sites (URLs)
    ·        SAP transaction calls.
    ·        Web applications and Web templates
    ·        Crystal Reports
    Regards
    Shilpa
