Problem w/ PHP tutorial ... duplicate data

Hi --
I have Oracle XE installed and it's running fine.
Now I am going through the PHP tutorial: http://download-uk.oracle.com/docs/cd/B25329_01/doc/appdev.102/b25317/toc.htm
In Chapter 3 (http://download-uk.oracle.com/docs/cd/B25329_01/doc/appdev.102/b25317/ch3.htm#sthref172), I copied the PHP files to my machine and they are running fine except that the "SELECT * FROM DEPARTMENTS" statement produces duplicate data for each column in the table, e.g.:
10,10,Administration,Administration,200,200,1700,1700
20,20,Marketing,Marketing,201,201,1800,1800
30,30,Purchasing,Purchasing,114,114,1700,1700
Any idea why the "oci_fetch_array($stid, OCI_RETURN_NULLS)" statement would return two copies of each of the columns?
When I look at the same table using SQL*Plus, there are no problems.
Thanks!
Gary

Here is the PHP code that is causing the problem (copied exactly from tutorial):
<?php // File: anyco.php
require('anyco_ui.inc');
// Create a database connection
$conn = oci_connect('hr', 'hr', '//localhost/XE');
ui_print_header('Departments');
do_query($conn, 'SELECT * FROM DEPARTMENTS');
ui_print_footer(date('Y-m-d H:i:s'));
// Execute query and display results
function do_query($conn, $query)
{
  $stid = oci_parse($conn, $query);
  $r = oci_execute($stid, OCI_DEFAULT);
  print '<table border="1">';
  while ($row = oci_fetch_array($stid, OCI_RETURN_NULLS)) {
    print '<tr>';
    foreach ($row as $item) {
      print '<td>'.($item ? htmlentities($item) : ' ').'</td>';
    }
    print '</tr>';
  }
  print '</table>';
}
?>
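
For reference, oci_fetch_array() defaults to OCI_BOTH when neither OCI_ASSOC nor OCI_NUM is included in the mode, so passing only OCI_RETURN_NULLS returns each column twice: once under its numeric index and once under its column name, which would explain the doubled output above. A minimal sketch of the same loop with an explicit associative mode (everything else as in the tutorial code):
// Sketch: request associative keys only, so each column is fetched once.
while ($row = oci_fetch_array($stid, OCI_ASSOC + OCI_RETURN_NULLS)) {
  print '<tr>';
  foreach ($row as $item) {
    print '<td>'.($item ? htmlentities($item) : ' ').'</td>';
  }
  print '</tr>';
}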

Similar Messages

  • Http service component working php and sql data

    Hello, I'm using that component and I'm having trouble. On my home page
    I have 2 titles coming from my MySQL database (localhost/phpmyadmin), and I want it so that when I click title 1 it links to detail.mxml and displays the content of title 1,
    and when I click title 2 it displays the content of title 2 in detail.mxml.
    I don't know how to pass the parameters. Can you help me please? I need it.

    These links are good for showing how to use Flex + PHP + HTTPService + MySQL (a minimal PHP endpoint sketch in that style follows below):
    http://www.switchonthecode.com/tutorials/using-flex-php-and-json-to-modify-a-mysql-database
    http://www.switchonthecode.com/tutorials/flex-php-tutorial-transmitting-data-using-json
    http://www.switchonthecode.com/tutorials/flex-php-json-mysql-advanced-updating
    If this post answers your question or helps, please mark it as such.
    Greg Lafrance - Flex 2 and 3 ACE certified
    www.ChikaraDev.com
    Flex Training and Support Services
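    Not from those tutorials verbatim, but a minimal sketch of the PHP side they describe - a script that reads rows from MySQL and returns them as JSON for a Flex HTTPService. The database name, table name ("titles"), column names, and credentials are all assumptions; adjust them to your schema:
    <?php
    // getTitles.php -- hedged sketch: return title rows as JSON for a Flex HTTPService.
    // Assumed: a local MySQL database "mydb" with a table "titles" holding
    // id, title and content columns. Adjust names and credentials to your setup.
    $db = new mysqli('localhost', 'user', 'password', 'mydb');
    if ($db->connect_error) {
        die(json_encode(array('error' => $db->connect_error)));
    }

    // Optional id parameter so detail.mxml can request a single title's content.
    $sql = 'SELECT id, title, content FROM titles';
    if (isset($_GET['id'])) {
        $sql .= ' WHERE id = ' . (int) $_GET['id'];
    }

    $rows = array();
    $result = $db->query($sql);
    while ($row = $result->fetch_assoc()) {
        $rows[] = $row;
    }

    header('Content-Type: application/json');
    echo json_encode($rows);
    ?>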

  • Sender JDBC adapter: duplicate data sent

    Hi experts,
    The scenario is that XI is polling the data from the source system using a JDBC sender adapter, but duplicate data is being sent to the target system. I have checked the channel configuration and it is as follows:
    Polling Interval (secs) = 3600
    and scheduling for the channel in RWB is set as: Daily at ......AM for 2 hours
    As per my understanding, the adapter polls the data for one hour and then starts polling again for another hour, as set in the scheduling.
    If I set the Availability Planning as Daily at ......AM for 1 hour then the problem can be resolved.
    Please advise.
    thanks,
    Regards,
    Mohan

    This could be resolved if you can add a field like 'ReadFlag' to the DB table, and your SQL statement should have a check on this field.
    You can ask someone (who can actually make changes in the database) to add the field to the table.
    Please refer to this blog for more understanding:
    /people/yining.mao/blog/2006/09/13/tips-and-tutorial-for-sender-jdbc-adapter

  • BTREE and duplicate data items : over 300 people read this,nobody answers?

    I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with a 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if in my case it would be more efficient to have a b-tree with as key the combined (4 byte integer, 8 byte integer) and a zero-length or 1-length dummy data (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking if my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
         while (i < hcp->dup_tlen) {
              memcpy(&len, data, sizeof(db_indx_t));
              data += sizeof(db_indx_t);
              DB_SET_DBT(cur, data, len);
              /*
               * If we find an exact match, we're done. If in a sorted
               * duplicate set and the item is larger than our test item,
               * we're done. In the latter case, if permitting partial
               * matches, it's not a failure.
               */
              *cmpp = func(dbp, dbt, &cur);
              if (*cmpp == 0)
                   break;
              if (*cmpp < 0 && dbp->dup_compare != NULL) {
                   if (flags == DB_GET_BOTH_RANGE)
                        *cmpp = 0;
                   break;
              }
    What's the expert opinion on this subject?
    Vincent

    Hi,
    The special thing about it is that with a given key,
    there can be a LOT of associated data, thousands to
    tens of thousands. To illustrate, a btree with a 8192
    byte page size has 3 levels, 0 overflow pages and
    35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note
    that I wrote "can", since some keys only have a few
    dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default
    lexical ordering with set_dup_compare is OK, so I
    don't touch that. I'm getting the data items sorted
    as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA)
    performance", due to a lot of disk read operations.In general, the performance would slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method lookups and inserts have a O(log n) complexity (which implies that the search time is dependent on the number of keys stored in the underlying db tree). When doing put's with DB_NODUPDATA leaf pages have to be searched in order to determine whether the data is not a duplicate. Thus, giving the fact that for each given key (in most of the cases) there is a large number of data items associated (up to thousands, tens of thousands) an impressive amount of pages have to be brought into the cache to check against the duplicate criteria.
    Of course, the problem of sizing the cache and the database's pages arises here. These settings should tend toward large values, so that the cache can accommodate large pages (in which hundreds of records can be hosted).
    Setting the cache and the page size to their ideal values is a process of experimentation.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
    While there may be a lot of reasons for this anomaly,
    I suspect BDB spends a lot of time tracking down
    duplicate data items.
    I wonder if in my case it would be more efficient to
    have a b-tree with as key the combined (4 byte
    integer, 8 byte integer) and a zero-length or
    1-length dummy data (in case zero-length is not an
    option).
    Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback.
    You can have records with a zero-length data portion.
    Also, you could provide more information on whether or not you're using an environment and, if so, how you configured it. Have you thought of using multiple threads to load the data?
    Another possibility would be to just add all the
    data integers as a single big giant data blob item
    associated with a single (unique) key. But maybe this
    is just doing what BDB does... and would probably
    exchange "duplicate pages" for "overflow pages"This is a terrible approach since bringing an overflow page into the cache is more time consuming than bringing a regular page, and thus performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    Or, the slowdown is a BTREE thing and I could use a
    hash table instead. In fact, what I don't know is how
    duplicate pages influence insertion speed. But the
    BDB source code indicates that in contrast to BTREE
    the duplicate search in a hash table is LINEAR (!!!)
    which is a no-no (from hash_dup.c):
    The Hash access method does, as you observed, a linear search through a duplicate set (the time spent grows with the number of items in the bucket, even though the hash lookup itself is O(1)). Combined with the fact that you don't want duplicate data, using the Hash access method may not improve performance.
    This is a performance/tuning problem and it involves a lot of resources on our part to investigate. If you have a support contract with Oracle, then please don't hesitate to raise your issue on Metalink, or indicate that you want this issue to be handled in private and we will create an SR for you.
    Regards,
    Andrei

  • DTP Error: Duplicate data record detected

    Hi experts,
    I have a problem with loading data from a DataSource to a standard DSO.
    In the DS there are master data attributes which have a key containing id_field.
    In the End routine I do some operations which multiply the lines in the result package and fill a new date field - defined in the DSO (and also in the result_package definition).
    E.g.:
    Result_package before End routine:
    Id_field | attr_a | attr_b | ... | attr_x | date_field
    1        | a1     | b1     | ... | x1     |
    2        | a2     | b2     | ... | x2     |
    Result_package after End routine:
    Id_field | attr_a | attr_b | ... | attr_x | date_field
    1        | a1     | b1     | ... | x1     | d1
    2        | a1     | b1     | ... | x1     | d2
    3        | a2     | b2     | ... | x2     | d1
    4        | a2     | b2     | ... | x2     | d2
    The date_field (date type) is one of the key fields in the DSO.
    When I execute the DTP I get an error in the section "Update to DataStore Object": "Duplicate data record detected"
    "During loading, there was a key violation. You tried to save more than one data record with the same semantic key."
    As I know, the result_package key contains all fields except fields of type i, p, f.
    In simulate mode (debugging) everything is correct and the status is green.
    In the DSO I have unchecked the checkbox "Unique Data Records".
    Any ideas?
    Thanks in advance.
    MG

    Hi,
          In the end routine, try giving
    DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE COMPARING  XXX  YYY.
    Here XXX and YYY are keys so that you can eliminate the extra duplicate record.
    Or you can even try giving
        SORT itab_XXX BY field1 field2 field3 ASCENDING.
        DELETE ADJACENT DUPLICATES FROM itab_XXX COMPARING field1 field2 field3.
    This can be given before you loop over your internal table (in case you are using an internal table and loops); itab_xxx is the internal table.
    field1, field2 and field3 may vary depending on your requirement.
    By using the above lines, you can get rid of the duplicates coming through the end routine.
    Regards
    Sunil

  • How to avoid 'duplicate data record' error message when loading master data

    Dear Experts
    We have a custom extractor on table CSKS called ZCOSTCENTER_ATTR. The settings of this datasource are the same as the settings of 0COSTCENTER_ATTR. The problem is that when loading to BW it seems that validity (DATEFROM and DATETO) is not taken into account. If there is a cost center with several entries having different validity, I get this duplicate data record error. There is no error when loading 0COSTCENTER_ATTR.
    Enhancing 0COSTCENTER_ATTR to have one datasource instead of two is not an option.
    I know that you can set ignore duplicates in the infopackage, but that is not a nice solution. 0COSTCENTER_ATTR can run without this!
    Is there a trick you know to tell the system that the date fields are also part of the key??
    Thank you for your help
    Peter

    Alessandro - ZCOSTCENTER_ATTR is loading 0COSTCENTER, just like 0COSTCENTER_ATTR.
    Siggi - I don't have the error message described in the note.
    "There are duplicates of the data record 2 & with the key 'NO010000122077 &' for characteristic 0COSTCENTER &."
    In PSA the records are marked red with the same message (MSG no 191).
    As you see the key does not contain the date when the record is valid. How do I add it? How is it working for 0COSTCENTER_ATTR with the same records? Is it done on the R/3 or on the BW side?
    Thanks
    Peter

  • How to avoid duplicate data loading from SAP-r/3 to BI

    Hi !
           I have created one process chain that will load data into some ODS from R/3, where (in R/3) the datasources/tables are updated daily.
           I want to schedule the system such that, if on any day the source data is not updated (if the tables are as they were), then that data should not be loaded into the ODS.
           Can anyone suggest such a mechanism, so that I can always have unique data in my data targets?
           Please reply soon.
          Thank You !
           Pankaj K.

    Hello Pankaj,
    By setting the unique records option, you are pretty much letting the system know not to check the uniqueness of the records using the change log and the ODS active table log.
    Also, in order to avoid the problem where two requests get activated at the same time, please make sure you select the options "Set Quality Status to 'OK' Automatically" and "Activate Data Automatically". That way you have the option to delete a single request as required without having to delete the whole data.
    This is all to avoid the issue where even the new request has to be deleted in order to delete the duplicate data.
    Unless the timestamp field is available in the table on top of which you have created the datasource, it will be difficult to check the delta load.
    Check the table used to make sure there is no timestamp field or any other numeric counter field which can be used for creating a delta queue for the datasource you are dealing with.
    Let me know if the information is helpful or if you need additional information regarding the same.
    Thanks
    Dharma.

  • Problem in delete adjacent duplicates

    Hi All,
    I have a problem with DELETE ADJACENT DUPLICATES on an internal table.
    When I use it, I want the records which occur twice to be removed,
    but I also want records in which the compared field is empty to be ignored, i.e. I don't want
    records to be deleted when the field value I am comparing on is empty.
    Snippet of my code:
    DELETE ADJACENT DUPLICATES FROM xkomv COMPARING kschl.
    So if there is no value in KSCHL, that record should not be deleted.
    Waiting for your reply.
    Thanks in advance

    Try the following. Before using the statement
    DELETE ADJACENT DUPLICATES FROM xkomv COMPARING kschl.
    do the following:
    " Declare a temporary table for 'xkomv', say 't_xkomv', and copy the data.
    REFRESH t_xkomv.
    t_xkomv[] = xkomv[].
    " Keep only the empty-KSCHL rows in the temporary table and take them out of xkomv.
    DELETE t_xkomv WHERE kschl <> ' '.
    DELETE xkomv WHERE kschl = ' '.
    " Sort xkomv and then use DELETE ADJACENT DUPLICATES.
    SORT xkomv BY kschl.
    DELETE ADJACENT DUPLICATES FROM xkomv COMPARING kschl.
    " And after this, append the data from t_xkomv back to xkomv.
    APPEND LINES OF t_xkomv TO xkomv.
    Hope it helps you,
    Regards,
    Abhijit G. Borkar

  • Problem with update form and date (show 1970-01-01)

    Hi, I have an update form (PHP/MySQL) with many date input fields. When my date is 0000-00-00 it shows 1970-01-01. Why??
    This is the code:
    label for="data_chiusura"><strong>Data chiusura</strong></label>
          <input type="text" name="Data_chiusura" value="<?php echo $string=$row_Recordset1['data_chiusura'];
        if($string == '0000-00-00'){
        $string = '';
        } else {
        $string = date("d-m-Y", strtotime($string));
        }; ?>" id="Data_chiusura">
    Thanks
    k

    Why would the date ever be null? As long as the date has a non-zero, non-null value this function will work correctly.
    I'm a little confused by what you are doing here. So, you are pulling data from a table, and populating forms with it. This particular field is a date field, and the problem is that when the data for that field is a null, you are getting instead a date of 1970-01-01 displayed in that field - is that correct? What do you want to appear there when the data is null? Nothing?
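    Along the lines of the reply above, a small sketch of a guard that shows an empty field for NULL, empty, or zero dates and only formats genuine dates (the helper name is made up; the column name comes from the question):
    <?php
    // Hedged sketch: format a MySQL DATE value for display, returning an
    // empty string for NULL, empty, or zero dates instead of 1970-01-01.
    function format_date_or_blank($value) {
        if ($value === null || $value === '' || $value === '0000-00-00') {
            return '';
        }
        $ts = strtotime($value);
        // strtotime() returns false for unparsable input; keep the field blank then.
        return ($ts === false) ? '' : date('d-m-Y', $ts);
    }

    // Usage with the row from the question:
    // echo format_date_or_blank($row_Recordset1['data_chiusura']);
    ?>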

  • Check duplicate data entry in multi record block,which is a mandatory field

    Dear all,
    I have a situation where I have to check for duplicate data entry (on a particular field which is mandatory, i.e. it cannot be skipped by the user without entering a value) while data is keyed in to a multi-record block.
    For reference I have used the following logic:
    1> In a When-Validate-Record trigger of that block I assign the value of the current item to a table type variable (a collection),
    as this trigger fires every time I leave a record, so it keeps collecting the current item's value, and this process continues.
    then
    2> A When-Validate-Item trigger on the corresponding item (i.e. at item level) has been written, which compares the value of the current item with the values stored in the table type variable from the When-Validate-Record trigger. If the current item value matches any value stored in the table type variable, I show a 'Duplicate Record' message followed by RAISE FORM_TRIGGER_FAILURE.
    This code is working fine for checking duplicate values of that multi-record field.
    The problem is that if the user enters a value in that field, then goes to the next field, enters a value there and then presses the 'Enter Query' icon, both Validate triggers fire. As a result, first When-Validate-Record fires, which stores the value, and then When-Validate-Item fires, so it shows the duplicate record message.
    Please give me a meaningful logic or code for solving this problem.
    Any other logic to solve this problem is also welcome

    @Ammad Ahmed
    First of all, thanks. Your logic worked, but I still have a little bit of a problem.
    Now the requirement is a master-detail form where both master and detail are multi-record, and the detail cannot have duplicate records,
    such as:
    MASTER:
    A code
    A1
    A2
    DETAIL:
    D code
    d1
    d2 <- valid, as for master A1 the details d1, d2 are not duplicates
    d2 <- invalid, as for master A1 the details d2, d2 are duplicates
    Validation rule: the A Code - D Code combination is unique. The system will stop users from entering a duplicate D Code for an A Code. An appropriate error message will be displayed.
    Actually I am facing a typical problem: the same logic has been applied in the detail section, and it works fine when I am inserting new records. The problem starts when I query; after the query, say 2 records (saved earlier) are displayed in that field. Now if I insert a new record with a value exactly the same as one already present on the screen (i.e. a value populated by the query), it does not show a duplicate. Could you tell me the reason and help me out? It's urgent, please.

  • Problem displaying php page in dreamweaver

    I am having problems displaying PHP scripting in Dreamweaver.
    Need your advice.
    Installed Dreamweaver 8, ColdFusion 7, MySQL, PHP 5.2 (using the Windows installer).
    Created file test.php for testing, containing:
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
    <title>Untitled Document</title>
    </head>
    <body>
    date is:<b><?php echo Hello ?></b>
    </body>
    </html>

    ulises_arsi wrote:
    > I am having problems displaying php scripting on dreamweaver.
    Tell us what the problems are.
    > Installed dreamweaver 8, Coldfusion 7, mysql, php5.2 (using windows installer).
    PHP needs to be configured with a web server, such as Apache or IIS.
    ColdFusion is also a web server, but as far as I know, it cannot be configured to serve PHP pages.
    > date is: <?php echo Hello ?>
    The only thing that would display is an error message. Hello needs to be enclosed in quotes:
    <?php echo 'Hello'; ?>
    David Powers
    Adobe Community Expert
    Author, "Foundation PHP for Dreamweaver 8" (friends of ED)
    http://foundationphp.com/

  • Check Duplicate data during data key-in Multi Record Block

    Dear all,
    I have a situation where I have to check for duplicate data entry (on a particular field which is mandatory, i.e. it cannot be skipped by the user without entering a value) while data is keyed in to a multi-record block.
    For reference I have used the following logic:
    1> In a When-Validate-Record trigger of that block I assign the value of the current item to a table type variable (a collection),
    as this trigger fires every time I leave a record, so it keeps collecting the current item's value, and this process continues.
    then
    2> A When-Validate-Item trigger on the corresponding item (i.e. at item level) has been written, which compares the value of the current item with the values stored in the table type variable from the When-Validate-Record trigger. If the current item value matches any value stored in the table type variable, I show a 'Duplicate Record' message followed by RAISE FORM_TRIGGER_FAILURE.
    This code is working fine for checking duplicate values of that multi-record field.
    The problem here is that if the user gets the 'Duplicate Record' message and then, without saving the values, tries to query the block, the When-Validate-Item trigger fires again, whereas I am expecting the Oracle default alert ('Do you want to save?'). I want to stop this When-Validate-Item firing at query time, i.e. while the user tries to query.
    Please give me a meaningful logic or code for solving this problem
    Any other logic to solve this problem is also welcome

    When-Validate-Record trigger
    When-Validate-Item trigger
    That smells like Oracle Forms...
    And the Oracle Forms forum is over here: Forms

  • Report Builder showing duplicate data

    Hi everyone!
    When adding sub reports to my table, it now duplicates/repeats the data many times.
    why is this happening and how do I stop it?
    Thanks :)
    Allana

    Hi Allana,
    When I directly insert a subreport without any parameters to a details row cell, I can easily reproduce this issue in my environment. Generally, if we want to avoid this issue, we must design a parameterized report (for example, a report that shows the details
    for a specific customer) as the subreport. For more details, please refer to the following steps:
    Create a parameter, then add a filter based on the parameter to filter the data in the subreport.
    In the main report, insert the subreport with the corresponding parameter values.
    Then we can filter the subreport based on the parameter values to eliminate the duplicate data. Besides, we can also add a group in the main report to avoid the duplicate data in main report.
    References:
    Tutorial: Adding Parameters to a Report (SSRS)
    Add a Subreport and Parameters (Report Builder and SSRS)
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Insert data into table 1 but remove the duplicate data

    hello friends,
    I am trying to insert data into table tab0 using hints.
    The query is like this:
    INSERT INTO /*+ APPEND PARALLEL(tab0) */ tab NOLOGGING
    (select /*+ parallel(tab1)*/
    colu1,col2
    from tab1 a
    where a.rowid =(select max (b.rowid) from tab2 b))
    but this query takes too much time, around 5 hrs,
    because the data is almost 40-50 lakh (4-5 million) rows.
    I am using
    a.rowid =(select max (b.rowid) from tab2 b))
    to remove the duplicate data,
    but it takes too much time,
    so please can you suggest any other option to remove the duplicate data and
    resolve the performance problem.
    Thanks in advance.

    In the code you posted, you're inserting two columns into the destination table. Are you saying that you are allowed to have duplicates in those two columns but you need to filter out duplicates based on additional columns that are not being inserted?
    If you've traced the session, please post your tkprof results.
    What does "table makes bulky" mean? You understand that the APPEND hint is forcing the insert to happen above the high water mark of the table, right? And you understand that this prevents the insert from reusing space that has been freed up because of deleted in the table? And that this can substantially increase the cost of full scans on the table. Did you benchmark the INSERT without the APPEND hint?
    Justin

  • How to bind dynamic row data to submit it by HTTP submit (PHP) - addInstance and Data Binding

    Hi,
    I have a problem with submitting all data (variables) via HTTP from a PDF to submit.php.
    I have a table with a dynamic add/remove Table Row button. When I add rows their names are Table.Row[0], Table.Row[1], Table.Row[2], etc. Only Table.Row is real; every other row is created dynamically by the addInstance script command.
    When I fill the "Data Binding" box like this: "Use name(Row)", then after submitting to PHP I see only the last Table.Row data. For example, if the last one were Table.Row[3], I would see only that on my submit.php and the others would be replaced by this value. This happens because values with the same name replace each other (the data binding sees only one Table.Row without the instance name: "[1]", "[2]", "[3]", etc.).
    I guess that if I could change something to make the addInstance command create Row names like Row1, Row2, Row3, then all would be OK.
    Another way is to change something in the "Data Binding" box (Object > Binding tab) to get a relative name like Row[*] instead of "Use name(Row)".
    I don't know how to solve it and I need your help.

    Create a binding for your dataTable.
    In the binding create a UIData element with getters and setters.
    You can manipulate rows and columns from it.
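    Not a LiveCycle binding fix, but for reference on the PHP side: when the submitted field names carry a trailing [] (or otherwise arrive with distinct names), PHP collects them into an array in $_POST, so every row survives the submit. A hedged sketch with a hypothetical field name "Row[]":
    <?php
    // submit.php -- hedged sketch, assuming the form posts its rows as repeated
    // fields named "Row[]" (hypothetical naming; the LiveCycle binding would
    // need to be set up so each instance keeps a distinct name).
    $rows = isset($_POST['Row']) ? (array) $_POST['Row'] : array();

    foreach ($rows as $i => $value) {
        // Each dynamically added row arrives as its own array element here.
        echo 'Row ' . $i . ': ' . htmlspecialchars($value) . "<br />\n";
    }
    ?>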

Maybe you are looking for

  • I want a mobile for adventures!!!!

     I have a Nokia 5140i, I've always loved the phone, and I've had it with me on my adventures in South Africa, New Zealand and Australia. But... I've had it for 4-5 years now, and it's showing signs of it being soon time to be exchanged. So, I'm out l

  • Per my last post - potentially fatal mistake

    I think my software guy gave me the wrong software, I told him I need Server 2003 so I can run CF Standard. I seem to have Small Business Version, but I just checked, and I never realized Adobe recommends the Web edition. I can't get anything to hook

  • Upgrading a wlc 5508 from 7.0.116 to 7.4

    Hi, I have a WLC 5508 running version 7.0.116.0 that I need to upgrade to use the CAP2602I AP. I understand that I need to upgrade it to version 7.0.240 before 7.4.100 to avoid losing HREAP VLAN mappings, and I have also read that I need to install

  • Using arrays

    Hello I wanted to know if using the array method would be the most suitabe for creating a portfolio feature where clicking on thumbnail images would load outside swf's. thanks alot for helping mt

  • Lost iphone 5 find my phone

    I lost my phone and had set up find my phone on it but at the time I lost the data was off and I set up lock code, after I lost I turn on the lost mode, will someone be able to reset the device? will it show on my find my phone if in the future it wi