Physical / Logical CPUs help

Hello there. I'm a bit confused about what Physical / Logical CPUs really means. I ran 3DMark06 to test out my new build and it was very disappointing. My motherboard is the P6N SLI board. The processor is the X6800, not overclocked. I did update to the newest BIOS. It seems only one of the cores is being recognized. Here is what my results look like:
Below is the result details of your submitted project.
Main Test Results
3DMark Score 4293 3DMarks
SM 2.0 Score 1829 Marks
SM 3.0 Score 1835 Marks
CPU Score 1267 Marks
Detailed Test Results
Graphics Tests
1 - Return to Proxycon 14.81 FPS
2 - Firefly Forest 15.671 FPS
CPU Tests
CPU1 - Red Valley 0.398 FPS
CPU2 - Red Valley 0.646 FPS
Any help? Thanks.

OK, updated my sig to the new build. Yeah, I understand that. It's just weird, because this is what it reads about my cores:
Processor: Intel Core 2 2931 MHz
Physical / Logical CPUs: 1 / 1
MultiCore: 2 Processor Cores
HyperThreading: Available Disabled
Graphics Card: NVIDIA GeForce 7900 GT
Graphics Driver: NVIDIA GeForce 7900 GT/GTO
Co-operative adapters: No
DirectX Version: 9.0c
System Memory: 2048 MB
Disk Space: 156.33 GB
Motherboard Manufacturer: MSI
Motherboard Model: MS-7350
Operating System: Microsoft Windows XP
CPU-Z is telling me I only have one core as well.
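As a sanity check independent of 3DMark and CPU-Z, you can ask Windows itself how many logical processors it sees (a quick sketch, assuming Windows XP as in the report above; a Core 2 Duo X6800 should report 2):

    C:\> echo %NUMBER_OF_PROCESSORS%
    2

If this prints 1, the OS itself only sees one core, which points at a BIOS setting or the Windows installation rather than at 3DMark.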

Similar Messages

  • Physical/Logical model that the Oracle database implements

    Dear Forum's member,
    I need documentation where I can see a graphical representation or a figure of the physical/logical relationship model that Oracle implements internally for its objects. It's not about a specific example; it's about how Oracle stores the objects it manages and how they relate to each other.
    Best regards,
    José Guillén

    I once used the ERwin design tool, which can help with your task. I do not know if it is still around, since I last used it in 2005.

  • How to Create Primary DB and Physical/Logical Standby DB on the same host?

    I have run into an issue. I want to create one primary DB, one physical standby DB, and one logical standby DB on the same host.
    Creating this environment on a single host aims to test whether we can use an EM Patching deployment procedure to apply patches on the primary/physical/logical DBs successfully.
    I tried to set up this environment but failed. I want to know more about creating a primary DB, a physical standby DB, and a logical standby DB on the same host, and how to configure them to work together.
    Below are the steps I tried:
    1. Create the primary DB in /scratch/primary_db.
    2. Install the physical standby DB software only (Oracle Home /scratch/physical_db).
    3. Install the logical standby DB software only (Oracle Home /scratch/logical_db).
    4. Use the EM wizard to create the physical standby database and the logical standby database; these two targets show up on the "All Targets" page.
    5. But when using the EM Patching DP, it fails, and the reason is that the listeners of the physical and logical DBs are not configured properly.
    Issues:
    So I want to know how to configure the physical and logical DBs' listeners, using EM or manually.
    If the listener name of the primary DB is LISTENER, the port is 1521, and the listener.ora is under the /scratch/primary_db/network/admin directory, then how do I configure the physical and logical DBs' listener names and ports?

    Hi,
    As this is a test case, you need to create one more listener for each Oracle Home (/scratch/physical_db and /scratch/logical_db); make sure they have different names and ports.
    Then add the new listeners manually using Grid Control.
    Try it and let me know.
    Regards
    Amin
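    A minimal sketch of what Amin describes, with hypothetical listener names, SIDs, host, and ports (only the primary's LISTENER on 1521 comes from the thread; adjust everything else to your environment):

        # /scratch/physical_db/network/admin/listener.ora
        LISTENER_PHYS =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1522)))
        SID_LIST_LISTENER_PHYS =
          (SID_LIST =
            (SID_DESC =
              (ORACLE_HOME = /scratch/physical_db)
              (SID_NAME = physdb)))

        # /scratch/logical_db/network/admin/listener.ora: same shape,
        # e.g. LISTENER_LOG on PORT 1523 with SID_NAME = logdb

    Start each listener with lsnrctl from its own Oracle Home (lsnrctl start LISTENER_PHYS), then add the new listeners as targets in Grid Control.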

  • Exporting & Importing Physical & Logical schemas, data servers, agents

    Hi,
    I am using ODI 10g.
    I want to export Physical & Logical schemas, data servers, agents from my ODI test environment and to import them to ODI production environment.
    My requirement is to do this export import through some scripts instead of doing it manually.
    Please guide for this.
    Thanks,
    Divya

    Hi Divya,
    Personally, I feel that rather than exporting individual components/objects, it is better to export the master repository as a whole.
    You can make use of the ODI tool OdiExportMaster (under your package -> Tools -> Oracle Data Integrator Objects) for exporting, and the Import Master Repository wizard (All Programs -> Oracle -> Oracle Data Integrator -> Repository Management -> Master Repository Import) for importing.
    Thanks,
    Guru.
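    Since the requirement is a scripted export, the OdiExportMaster step can be placed in an ODI package and the resulting scenario launched from the command line with the agent's standard startscen script. A hedged sketch; the parameter names below are from memory of the 10g Tools Reference and are assumptions to verify against your version:

        OdiExportMaster "-TODIR=/odi/exports" "-ZIPFILE_NAME=master_repo_export.zip"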

  • Large number of files (physical + logical) opened by an SAP-related job

    Hello,
    We are implementing SAP ECC 6.0 on IBM System i (iSeries) 9406-550, i5/OS V5R4.
    SAP is implemented in subsystem R3_00, and the different SAP processes are served by OS/400 jobs: WP00, WP01, WP02, etc.
    The noticeable thing is that from the very beginning of the working day, the number of open files (physical + logical) opened by one job (e.g., WP00) is extremely large (900-1000 files), and it remains large during the whole day; i.e., it varies only by a small extent.
    The question is: is this natural SAP behavior, or are we missing something that would cause the unneeded files to be closed after the relevant requirements are fulfilled?
    The problem with a huge number of files opened by a job is that the large number of file ODPs exhausts memory, which drives memory faults up and thus negatively impacts system performance.
    Thank you in advance for your cooperation.
    Best regards.

    Hi Reda,
    that is working exactly as designed ....
    Yes, memory utilization is not exactly low because of that, but on the other hand, it would be very expensive to close and re-open those files over and over again ...
    So, you have to deal with this somehow.
    The following are the default settings, which are active on your site as well (you should be able to see these values in dev_w0 etc. ...):

        dbs/db4/odp_threshold        = 800    (ODP threshold)
        dbs/db4/odp_commit_threshold = 810    (ODP commit threshold)
        dbs/db4/odp_open_threshold   = 850    (ODP open threshold)

    (This holds even when you see 900-1200 open files - that is just a "mis-adding", more or less ...)
    If you are really interested: try changing these parameters - but at your own risk; I do not have good experience with that ...
    Regards
    Volker Gueldenpfennig, consolut international ag
    http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

  • How do logical volumes help performance in AIX? (Should have posted in the IBM forum)

    We are setting up a new DB server and the disks are in a RAID 5 configuration. Does putting data and indexes in different logical volumes help performance?

    (I hope I'm not falling for an April Fools' joke here...)
    Hi Maran,
    As someone already answered, if both volumes are striped against all available disks, you can put everything in one volume and expect equal or better performance.
    However, I want to warn you against optimizing the disk structure without knowing whether your database will really bottleneck on disk access to index and data blocks. My storage manager and I wasted countless hours on such optimizations before realizing that we were wasting our time, because the application code does so much work of its own that disk I/O is not even close to being an issue.
    -- Chen
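    Chen's point about measuring first can be made concrete. Before rearranging volumes, check whether the instance actually spends significant time waiting on data-file I/O (a sketch using the standard v$ wait-event view):

        SELECT event, total_waits, time_waited
        FROM   v$system_event
        WHERE  event LIKE 'db file%'
        ORDER  BY time_waited DESC;

    If 'db file sequential read' and 'db file scattered read' are not near the top of the instance's overall wait profile, separating data and indexes into different logical volumes is unlikely to buy anything.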

  • HT4059 I purchased an iBook and accidentally paused while downloading. Now I cannot download the rest. The message keeps telling me to return to the purchase page to download, but every time I click "Download", the message reappears. Circular logic. Help!

    I purchased a book on iBooks and accidentally paused it while downloading. Now I cannot continue its download. I get a message saying I have already purchased the book and to go to the Purchased page to download it. Trouble is, every time I do that and tap the "Download" button, the same message reappears. EVERY TIME. Circular logic. Help!

    Okay, you definitely need to contact support about what's going on. Here is a form that will send an e-mail to support for you:
    http://www.apple.com/emea/support/itunes/contact.html
    As far as the prices go, the difference is likely that the laptop version is the SD version and the iPad one is available in HD. That's the only reason you'll see a price difference. The price has nothing to do with the device for movies/movie rentals; for apps, yes, but not for these.

  • Two physical source formulas for one logical column

    I have two fact tables :
    1. W_SERVICE_REQ_F(opened_dt_wid, assigned_dt_wid, closed_dt_wid, QUEUE_WID, SERVICE_REQ_WID): grain is one row per service request
    2. W_SERVICE_REQ_DAY_A(DATE_WID, QUEUE_WID, NUM_OPENED, NUM_CLOSED, num_assigned)
    The goal is to answer NUM_OPENED, NUM_CLOSED per queue in a day and be able to drill-down to those service requests.
    Physical model:
    Common_Date.row_wid = W_SERVICE_REQ_F (Opened_Dt).opened_dt_wid
    Common_Date.row_wid = W_SERVICE_REQ_F (Closed_Dt).closed_dt_wid
    Common_Date.row_wid = W_SERVICE_REQ_DAY_A.DATE_WID
    Queue_d.row_wid = W_SERVICE_REQ_F.QUEUE_WID
    Queue_d.row_wid = W_SERVICE_REQ_DAY_A.QUEUE_WID
    Service_req_d.row_wid = W_SERVICE_REQ_F.SERVICE_REQ_WID. (there is no join between Queue_d and Service_req_d)
    BMM Fact and LTSs:
    I have W_SERVICE_REQ_DAY_A, W_SERVICE_REQ_F (Opened_Dt), W_SERVICE_REQ_F (Closed_Dt) as LTSs of "Fact - Service Request" in that order.
    BMM Fact logical columns (measures):
    # Closed
    # Opened
    Question:
    I want to configure each of the above logical columns with two physical columns having different aggregate formulas, to ensure that queries which don't need the service request number hit the aggregate table (similar to http://www.gerardnico.com/wiki/dat/obiee/fragmentationlevel_based).
    SUM(W_SERVICE_REQ_DAY_A.NUM_CLOSED)
    COUNT("W_SERVICE_REQ_F (Closed_Dt)".SERVICE_REQ_WID)
    Can someone help in configuring one logical column with two physical column sources with different aggregate formulas?
    This is what I tried:
    In # Opened Properties -> Data Type, I mapped it to both W_SERVICE_REQ_F (Closed_Dt).ROW_WID and W_SERVICE_REQ_DAY_A.NUM_CLOSED.
    In # Opened Properties -> Aggregation tab, I selected 'Default aggregation rule' as Count Distinct. At the bottom, the logical table source overrides are as follows:
    W_SERVICE_REQ_DAY_A : SUM("Fact - Service Request"."# Closed")
    W_SERVICE_REQ_F : COUNT("Fact - Service Request"."# Closed")
    Should it be done in a different way?
    PS: OBIEE 10.1.3.4.
    added OBIEE version: Feb 3, 2012 4:40 PM
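    A hedged suggestion, based only on the expressions already listed above (not a confirmed fix): in the Aggregation tab, the per-LTS overrides usually need to be written against the physical columns of each source rather than against the logical column itself, otherwise the definition is circular:

        W_SERVICE_REQ_DAY_A         : SUM(NUM_CLOSED)
        W_SERVICE_REQ_F (Closed_Dt) : COUNT(SERVICE_REQ_WID)

    With that in place, the BI Server can route queue/day-level queries to the aggregate source and queries that include the service request number to the detail source.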

  • Win 2003 Cluster + Oracle Fail Safe + Data Guard (Physical & Logical)

    Hello,
    It's my first post (sorry for my bad English)... I am building a high-availability solution for test purposes. For the moment I have set up the following and it runs OK, but I have a little problem with the logical database:

    Configuration
    ESX Server 2.0 with these machines:
    Windows 2003 Cluster (Enterprise Edition R2, 2 nodes)
    * NODE 1 - Oracle 10gR2 + Patch 9 + Oracle Fail Safe 3.3.4
    * NODE 2 - Oracle 10gR2 + Patch 9 + Oracle Fail Safe 3.3.4
    C:/ Windows software
    E:/ Oracle software (pfile -> R:/spfile)

    Virtual SAN
    * Datafiles, redo logs, etc. are on the virtual SAN.
    R:/ datafiles & archived logs & dump files & spfile
    S:/, T:/, U:/ -> redo logs
    V:/ undo

    Data Guard
    * NODE 3 - physical standby database
    * NODE 4 - logical standby database

    Oracle Fail Safe and the Windows cluster run OK, the switches...
    The physical standby runs OK (redo apply, switchover, failover, all OK), but the logical standby receives the redo OK and then hits a problem when it goes to apply it.

    The error is the following:
    ORA-12801: error signaled in parallel query server P004
    ORA-06550: line 1, column 536:
    PLS-00103: Encountered the symbol "," when expecting one of the following:
    ( - + case mod new not null <an identifier>
    <a double-quoted delimited-identifier> <a bind variable> avg count current exists max min prior sql stddev sum variance execute forall merge time timestamp interval date <a string literal with character set specification> <a number> <a single-quoted SQL string> pipe <an alternatively-quoted string literal with character set specification> <an alternativel
    update "SYS"."JOB$" set "LAST_DATE"=TO_DATE('11/09/07','DD/MM/RR'),

    I saw this SQL statement in dba_logstdby_events; it appeared together with the error in the alert log and in dba_logstdby_events.

    I'm a bit lost with this error. I don't understand why the logical database can't start applying the redo received from the primary database.

    The database has two tables with two columns each, one INTEGER and the other a VARCHAR2(25). It has no unusual column types.

    Thanks a lot for any help,
    Roberto Marotta

    I recreated the logical database OK, no problem, no errors.

    Redo apply runs OK. I performed log switches on the primary database and they were applied on the logical and physical standby databases. But...

    When I created a tablespace on the primary database and then did a log switch on the primary, the changes transferred OK to the physical standby, but to the logical NO!!! The redo arrives in its path on the logical standby OK, but when the process tries to apply it, it reports the same error.

    SQL> select sequence#, first_time, next_time, dict_begin, dict_end, applied from dba_logstdby_log order by 1;

    SEQUENCE# FIRST_TI NEXT_TIM DIC DIC APPLIED
    --------- -------- -------- --- --- -------
          138 14/09/07 14/09/07 NO  NO  CURRENT
          139 14/09/07 14/09/07 NO  NO  CURRENT

    SQL> select event_time, status, event from dba_logstdby_events order by event_time, timestamp, commit_scn;

    14/09/07
    ORA-16222: automatic Logical Standby retry of last action
    14/09/07
    ORA-16111: log mining and apply setting up
    14/09/07
    ORA-06550: line 1, column 536:
    PLS-00103: Encountered the symbol "," when expecting one of the following:
    ( - + case mod new not null <an identifier>
    <a double-quoted delimited-identifier> <a bind variable> avg
    count current exists max min prior sql stddev sum variance
    execute forall merge time timestamp interval date
    <a string literal with character set specification>
    <a number> <a single-quoted SQL string> pipe
    <an alternatively-quoted string literal with character set specification>
    <an alternativel
    update "SYS"."JOB$" set "LAST_NAME" = TO_DATE('14/09/07','DD/MM/RR'),

    The alert.log reports the same message as the dba_logstdby_events view.

    Any idea?

    I'm a bit frustrated. This is the third time I have recreated the logical database OK and then reproduced the same error by creating a tablespace on the primary database, and I have no idea why.
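    A hedged diagnostic sketch, not a confirmed fix: the failing statement is an update of SYS.JOB$, i.e. internal job metadata, which a logical standby does not normally need to replicate. On 10gR2, SQL Apply can be told to tolerate errors on that object with the DBMS_LOGSTDBY package (verify the exact procedure signature against your version):

        -- on the logical standby
        ALTER DATABASE STOP LOGICAL STANDBY APPLY;
        EXECUTE DBMS_LOGSTDBY.SKIP_ERROR(stmt => 'DML', schema_name => 'SYS', object_name => 'JOB$');
        ALTER DATABASE START LOGICAL STANDBY APPLY;

    Then watch dba_logstdby_events after the next log switch to see whether apply moves past sequence 138.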

  • Physical/Logical Switch Over

    Hi Champions,
    I just want to know what the difference is between a logical and a physical standby switchover, and how DGMGRL's internal process works.
    Any suggestion is appreciated.
    Regards,
    Shitesh Shukla

    Here you go....
    http://ayyudba.blogspot.com/2007/10/performing-switchover-in-data-guard.html
    Hope this helps,
    Regards
    Duplicate RAC Database using RMAN
    http://www.oracleracexpert.com/2009/12/duplicate-rac-database-using-rman.html
    Send EMAIL using UTL_MAIL in Oracle
    http://www.oracleracexpert.com/2009/11/send-email-using-utlmail-in-oracle-10g.html
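    For a quick feel of what the linked post walks through, here is a minimal sketch of the SQL*Plus commands involved in each case (10g syntax; DGMGRL wraps the same steps in a single SWITCHOVER TO command). Physical standby:

        -- on the primary
        ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
        -- on the physical standby
        ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

    A logical standby switchover adds a negotiation phase, because both databases remain open:

        -- on the primary
        ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;
        -- on the logical standby
        ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;
        -- then commit, again primary first
        ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;
        ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

    The prepare/commit handshake, plus the fact that a logical standby replicates only a subset of the primary, is the main practical difference between the two switchover types.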

  • How to write code for this logic, please help me, very urgent

    Hi All,
    I am new to SAP ABAP. I was given this work and I am working on it; can anybody help me write the code? Please help me, this is very, very urgent.
    Here I am giving my logic; can anybody send me code matching this logic?
    This is very urgent.
    The program output should be in ALV format, and I need to create one command 'SAVE' on the output list; if the user clicks Save, the PROCESSEDON and PROCESSEDBY fields in ZFIBUE should be updated automatically.
    I am creating one custom table ZFIBUE having the fields (SERIALNO, BUKRS, MATNR, PRDHA, HKONT, GSBER, WRBTR, BUDAT, CREDATE, CRETIME, PROCESSED, PROCESSEDON, PROCESSEDBY, MAPPED).
    Fields of ZFIBUE and their data types:
    SERIALNO = NUMC
    BUKRS = CHAR
    MATNR = CHAR
    PRDHA = CHAR
    HKONT = CHAR
    GSBER = CHAR
    WRBTR = CHAR
    BUDAT = date
    CREDATE = date
    CRETIME = time
    PROCESSED = CHAR
    PROCESSEDON = date
    PROCESSEDBY = CHAR
    MAPPED = CHAR
    And the select-options and parameters:
    select-options: s_bukrs for bseg-bukrs,
                    s_hkont for bseg-hkont,
                    s_budat for bkpf-budat,
                    s_processed for zfibue-processed,
                    s_processedon for zfibue-processedon,
                    s_mapped for zfibue-mapped.
    parameters: p_chk1 as checkbox,
                p_chk2 as checkbox,
                p_filepath type rlgrap-filename.
    1.1 Validate the user inputs (S_BUKRS and S_HKONT) against respective check tables (T001 and SKB1). If the validation fails, provide respective error message. Eg: “Invalid input for Company Code”.
    1.2 Fetch SERIALNO, BUKRS, MATNR, PRDHA, HKONT, GSBER, WRBTR, BUDAT, CREDATE, CRETIME, PROCESSED, PROCESSEDON, PROCESSEDBY, MAPPED from table ZFIBUE into internal table GT_ZFIBUE where BUKRS IN S_BUKRS, HKONT IN S_HKONT, BUDAT IN S_BUDAT, PROCESSED IN S_PROCESSED, PROCESSEDON IN S_PROCESSEDON, and MAPPED IN S_MAPPED.
    1.3 If P_CHK2 = ‘X’, go to step 1.11. Else continue.
    1.4 If P_CHK1 = ‘X’, continue. Else go to step 1.9
    1.5 Fetch MATNR, PRDHA from MARA into GT_MARA for all entries in GT_ZFIBUE where MATNR = GT_ZFIBUE-MATNR.
    1.6 Sort and delete adjacent duplicates from GT_MARA based on MATNR.
    1.7 Loop through GT_ZFIBUE where PRDHA = blank.
            Read table GT_MARA with key MATNR = GT_ZFIBUE-MATNR.
            If sy-subrc = 0.
                Move GT_MARA-PRDHA to GT_ZFIBUE-PRDHA.
                Modify table GT_ZFIBUE. "Update product hierarchy
            Endif.
            Fetch PRDHA, GSBER from ZFIBU into GT_ZFIBU for all entries in GT_ZFIBUE where PRDHA = GT_ZFIBUE-PRDHA.
            Read table GT_ZFIBU with key PRDHA = GT_ZFIBUE-PRDHA.
            If sy-subrc = 0.
                Move GT_ZFIBU-GSBER to GT_ZFIBUE-GSBER.
                Move 'X' to GT_ZFIBUE-MAPPED.
                Modify table GT_ZFIBUE.
            Endif.
        Endloop.
    1.8 Modify database table ZFIBUE from GT_ZFIBUE.
    1.9 Fill the field catalog table GT_FIELDCAT using the details of the output fields listed in section "Inputs/Outputs" (above).
        E.g.:  LWA_FIELDCAT-SELTEXT_L = 'Serial Number'.
               LWA_FIELDCAT-DATATYPE = 'NUMC'.
               LWA_FIELDCAT-OUTPUTLEN = 9.
               LWA_FIELDCAT-TABNAME = 'GT_ZFIBUE'.
               LWA_FIELDCAT-FIELDNAME = 'SERIALNO'.
               Append LWA_FIELDCAT to GT_FIELDCAT.
    Note: a) The output field GT_ZFIBUE-PROCESSED will be editable, by setting INPUT = 'X' in the field catalog (GT_FIELDCAT).
          b) The standard ALV functionality will be used to give the user the option of selecting all entries or blocks of entries at a time.
          c) The PF-STATUS STANDARD_FULLSCREEN from function group SLVC_FULLSCREEN will be copied into the program and modified to include a "SAVE" button.
    1.10 Call the function module REUSE_ALV_GRID_DISPLAY passing output table GT_ZFIBUE and field catalog GT_FIELDCAT. Additional parameters like I_CALLBACK_PF_STATUS_SET (= ‘ZFIBUESTAT’) and I_CALLBACK_USER_COMMAND (=’HANDLE_USER_ACTION’) will also be passed to handle user events. Go to 2.14.
    1.11 Download the file to P_FILEPATH using function module GUI_DOWNLOAD passing GT_ZFIBUE.
    1.12 Exit Program.
    Logic to be implemented in  routine “Handle_User_Action”
    This routine will have the following interface:
    FORM handle_user_action USING r_ucomm     LIKE sy-ucomm
                                  rs_selfield TYPE slis_selfield.
    ENDFORM.
    Following logic will be implemented in this routine:
    1. If r_ucomm = 'SAVE', continue; else exit.
    2. Loop through GT_ZFIBUE where SEL_ROW = 'X'. "Row is selected
       a. If GT_ZFIBUE-PROCESSED = 'X'.
          i.   GT_ZFIBUE-PROCESSEDON = SY-DATUM.
          ii.  GT_ZFIBUE-PROCESSEDBY = SY-UNAME.
          iii. MODIFY ZFIBUE FROM work area GT_ZFIBUE.
          Endif.
       Endloop.

    Hi Swathi,
    If it's very, very urgent, then you'd better get on with it; don't waste time on the web. Chop chop.
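    For what it's worth, the save-handler routine spelled out above might look roughly like this in ABAP. A sketch under assumptions: GT_ZFIBUE/GS_ZFIBUE are typed against ZFIBUE, and the ALV row-selection flag SEL_ROW is part of that structure as in the spec:

        FORM handle_user_action USING r_ucomm     LIKE sy-ucomm
                                      rs_selfield TYPE slis_selfield.
          " Only react to the custom SAVE button on the copied PF-STATUS.
          CHECK r_ucomm = 'SAVE'.
          LOOP AT gt_zfibue INTO gs_zfibue WHERE sel_row = 'X'.
            IF gs_zfibue-processed = 'X'.
              " Stamp who processed the entry and when.
              gs_zfibue-processedon = sy-datum.
              gs_zfibue-processedby = sy-uname.
              " Write the changed row back to the database table.
              MODIFY zfibue FROM gs_zfibue.
            ENDIF.
          ENDLOOP.
        ENDFORM.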

  • Configuration of the Physical & Logical standby servers on the same machine

    I've encountered a problem connecting to one of the standby servers.
    I created a Data Guard setup in 10.2: the primary on one machine, and one physical standby and one logical standby on another machine (both physical and logical on the same machine).
    I first created the primary, then the physical standby, with no problem at all.
    But when I created the logical standby server on the same machine as the physical one, I could not connect to both the physical and the logical; I can connect to only one of them.
    According to the Data Guard setup, db_name must be the same on the primary and all standby servers. This is fine when the primary and each standby are installed on separate machines.
    The db_name of the physical = rolex
    The db_name of the logical = rolex
    When we have more than one instance on the same machine, we just set ORACLE_SID = db_name (or instance name), then connect to it.
    If I connect to the physical and then try to connect to the logical, it goes to the physical, or vice versa.
    Does anyone have a solution?
    QN

    The DB_UNIQUE_NAME parameter will be the separator.
    Give the two standbys different DB_UNIQUE_NAME values, and set ORACLE_SID to the DB_UNIQUE_NAME you gave.
    DB_NAME stays the primary's database name; DB_UNIQUE_NAME is what distinguishes each standby instance.
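    A minimal sketch of what that looks like in practice, with hypothetical unique names (only db_name = rolex comes from the thread):

        # spfile/init.ora of the physical standby
        *.db_name='rolex'
        *.db_unique_name='rolex_phy'

        # spfile/init.ora of the logical standby
        *.db_name='rolex'
        *.db_unique_name='rolex_log'

        # pick the instance before connecting, e.g. on Windows:
        C:\> set ORACLE_SID=rolex_phy

    One caveat worth checking in the documentation: the 10.2 logical standby creation procedure (ALTER DATABASE RECOVER TO LOGICAL STANDBY new_name) assigns the logical standby a new DB_NAME, which by itself also separates it from the physical standby.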

  • Logical Debugging Help

    Help!
    I've written an application that isn't working exactly like I think it should. The program is supposed to compare two files: if there is a string in file one that is not in file two, it should write that string to a third file. To clarify, the first file is the final list of students taking the class, and the second file is the list of students who attended that session (they register electronically). Therefore, I need the program to examine the two lists and tell me who was absent. At the moment it works for the first absent person (i.e., it writes that student's information to the third file so I know they are absent), but it seems to just stop working after that. I'm sure there is a problem with the logic, but I can't figure out where. Below is the code I'm using. I'd appreciate any assistance!
    Thanks!
    John
    public void actionTaken(){
        try {
            // File with final registration list
            classList = new BufferedReader(new FileReader("classList.txt"));
            // File that has recorded attendance
            studentInput = new BufferedReader(new FileReader("studentInput.txt"));
            // String written to this file if they were absent.
            absent = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("Absent.txt", true)));
            // First while loop examines every line in the classList.txt file.
            // Should advance to the next line after the nested while loop executes.
            while ((cLT = classList.readLine()) != null){
                // Nested while loop examines every line in studentInput.txt
                while ((sIT = studentInput.readLine()) != null){
                    // Checks to see if this student has logged in.
                    if (cLT.equals(sIT)){
                        countTemp++; // countTemp is tripped if the student has logged in.
                    }
                }
                studentInput.close(); // Closes file after the entire file has been examined.
                // Checks countTemp to see if the student registered.
                if (countTemp == 0){ // Writes to the absent file if this is true.
                    absent.newLine();
                    absent.write(cLT);
                    absent.close();
                    countTemp = 0; // reinitialized to test next student name.
                    classList.close(); // Closes file after the entire file has been examined.
                }
            }
        } catch (IOException x) {
        }
    }

    You close the student attendance file (studentInput) after you read the first student name from the class list!!!
    Unless the files are going to get really big, I wouldn't do it this way. I'd create a Map of students who attended. Then I'd go through the names list and see if the names are present in the map. That way you could re-use the map for every student name.
    If you don't want to do it that way, you're going to have to make assumptions about the ordering of the two name lists, and whether one is necessarily a subset of the other.
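    A sketch of the lookup-structure approach suggested above (a HashSet here, i.e. the keys-only version of a Map), assuming one student name per line in both files:

        import java.io.*;
        import java.util.*;

        public class AbsenceCheck {
            public static void main(String[] args) throws IOException {
                // Read every student who attended into a set for fast lookups.
                Set<String> attended = new HashSet<String>();
                BufferedReader in = new BufferedReader(new FileReader("studentInput.txt"));
                String line;
                while ((line = in.readLine()) != null) {
                    attended.add(line.trim());
                }
                in.close();

                // Walk the class list once; anyone not in the set was absent.
                BufferedReader classList = new BufferedReader(new FileReader("classList.txt"));
                BufferedWriter absent = new BufferedWriter(new FileWriter("Absent.txt", true));
                String student;
                while ((student = classList.readLine()) != null) {
                    if (!attended.contains(student.trim())) {
                        absent.write(student);
                        absent.newLine();
                    }
                }
                classList.close();
                absent.close();
            }
        }

    Each file is opened and closed exactly once, which avoids the bug above: studentInput was closed inside the outer loop, so the second pass over the class list had nothing left to read.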

  • Soundtrack Pro making a LOT of files - used to working in Logic, need help

    So I'm used to working with Logic Studio/7.2/etc., and this is my first foray into working with Soundtrack Pro at work. My question is this: when I use the razorblade tool in Soundtrack to cut up a sequence (since I can't find an equivalent to Logic's split tool), it seems to be creating a LOT of extra files:
    http://www.1217design.com/pics/stp_question.png is an example of what I mean. Is this supposed to happen? The recorded files themselves were done in FCP and sent to Soundtrack Pro via the Send to STP Multitrack session function.
    What I'm trying to find out is: Is this supposed to be happening? How can I stop this from happening? Are all of these extra files necessary in order to export the final project? And what will happen if I delete them?
    I just want to know because the folder is now 4.71 GB for a 30-second audio file (the final AIFF export is roughly 30 MB), and over 200 of these extra files have been created in the process of working on this project.
    If this is what is necessary to work with Final Cut, then I won't be able to work with FCP due to size issues (which is a shame, as I feel so much more comfortable working in this environment than I do in FCP).
    Thanks for the help in advance,
    Sean

    Sean A wrote:
    By doing this, will it still create all of those extra files? That's what I'm trying to figure out. Is it just from the blade tool being used?
    For all edits, yeah, baby, STP will generate more files than cows create methane.
    First, I know what works for me, but I'm still learning, so I strongly encourage you to keep a COMPLETE BACKUP of all projects until you know your workflow. Also, you may have Preferences set up differently than I do, so you may not find the same paths as what I describe here.
    When you have an audio file the way you want it, and you're confident that you won't need those edit files again (be SURE) Process > Flatten All Actions (or Audible Actions) and Save that audio file. Then -- as I understand it so far -- you can safely trash all the edits related to that file. (If you use Photoshop or similar visual programs, it's the same idea.)
    During my first few weeks of intensive STP work, my drive grew more and more sluggish, for no reasons known to me. I saw disk space shrinking rapidly. Finally I figured it had to have something to do with STP, so I started searching and eventually I found my edits piling up in home/Documents/Soundtrack Pro Documents, especially (for me) in Edited Media. Since they all came from projects I'd finished and exported to AAC and MP3, I was comfortable trashing them all. Suddenly, whoa, I had an extra 12 GB of disk space available.
    Just keep those edit files until you are CERTAIN that you know what you're doing and that you're DONE. Otherwise, when some project needs one precious little 3MB turn of a phrase, you may find yourself confronting suicidal or homicidal tendencies.
    I welcome further clarification and/or correction from people who know more and can explain it better.
    Best,
    chuck

  • Logic Environment HELP!!! External MIDI NOT SYNCING UP with Ensoniq ASR-X.

    I need help setting up an environment for my Ensoniq ASR-X. As it stands, I have been able to get my Axium 49 to trigger sounds in the ASR-X, and I have also been able to get Logic to start the ASR's sequencer, but for some reason I get latency and a phase effect from the ASR.
    All I want to do is toggle between making the ASR-X the slave, so I can record tracks made on the machine into Logic, and making Logic the slave, so I can press play on the ASR and record my audio and MIDI into Logic 8 at the same time.
    If that's too much, then please just help me get my machine into an environment that will work inside Logic.
    Thanx!
    P.S.
    Any advice on how to set up the MIDI routing between my Axium 49, M-Audio Fast Track Pro USB interface, and ASR-X so they work together more smoothly would be much appreciated.

    Hi. Thanks for taking the time to reply. I did try that and it didn't solve the problem.
    What I did last night was to trash all my Logic Express files and the Logic pro install files as per the advice on the Logic Pro Troubleshooting Basics page. Then I reinstalled and updated Logic Pro.
    That seemed to fix the problem. It may have been that having a 7.1.1 version of Logic Express and a 7.2 version of Logic Pro on the same machine was causing confusion over preferences (if that makes sense; I'm not a computer expert). At any rate, I have my insert slots back now and I'm happy.
    Thanks again for taking the time to respond.
    iMac G5 1.8 gHz   Mac OS X (10.4.8)   80g hard drive, Matshita DVD-R UJ-825, 1 GB RAM
