Prevent duplicate data

I would like to make sure there are no duplicate data entries coming from my JSP, which populates an Oracle database table called MainTable. MainTable has an Id field that is the primary key, a ValData field with a varchar data type, and Fid and Fid2 fields with number data types. Fid and Fid2 are foreign key values taken from another table.
Id   ValData   Fid   Fid2
1    abc       34    2
2    efg       23    34
3    zeo       25    43

Sometimes someone can enter duplicate data and the ValData, Fid and Fid2 will end up like this:
Id   ValData   Fid   Fid2
1    abc       34    2
2    efg       23    34
3    zeo       25    43
4    zeo       25    43

Is there anything in Java I can implement to prevent duplicate data entry in the above example?

Thanks,
It now works after I added unique constraints to each of the fields.
Google results showed me that ORA-00001 is Oracle's duplicate-key error, so I added a condition in the SQLException catch block; it now catches all my duplicate entry attempts and shows a message to the user.
Here is what I have; I would like to know whether this is an efficient way of doing it:
try {
    // db stuff
} catch (SQLException sqle) {
    String message = sqle.getMessage();
    if (message.indexOf("ORA-00001") != -1) {
        out.println("Duplicate Data entry.");
    } else {
        out.println("some other ORA error");
    }
}
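
If it helps, here is a slightly more robust sketch: rather than scanning the message text, JDBC exposes the vendor error code through SQLException.getErrorCode(), and Oracle reports ORA-00001 as code 1. The maintable_seq sequence name below is hypothetical; substitute however you generate Id values:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DuplicateInsertExample {

    // Returns true if the row was inserted, false if Oracle rejected it as a duplicate.
    static boolean insertRow(Connection con, String valData, int fid, int fid2) throws SQLException {
        // maintable_seq is a hypothetical sequence used here to generate Id values
        String sql = "INSERT INTO MainTable (Id, ValData, Fid, Fid2) "
                   + "VALUES (maintable_seq.NEXTVAL, ?, ?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, valData);
            ps.setInt(2, fid);
            ps.setInt(3, fid2);
            ps.executeUpdate();
            return true;
        } catch (SQLException sqle) {
            if (sqle.getErrorCode() == 1) {   // ORA-00001: unique constraint violated
                return false;                 // duplicate data entry
            }
            throw sqle;                       // some other ORA error
        }
    }
}
Checking the error code avoids breaking if the message text changes, but either way the unique constraint remains the thing that actually prevents the duplicate.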

Similar Messages

  • Page level validation to prevent duplicate data entry into the database

    Hello,
    Can anyone please help me out with this issue.
I have a form with two items based on a table. I already have an item-level validation to check for null. Now I would like to create a page-level validation to check that duplicate data is not entered into the database. I would like to check the database when the user clicks the ‘Create’ button to ensure they are not inserting a duplicate record. If the data already exists, then show an error message and redirect them to another page. I am using APEX 3.2
    Thanks

    Hi,
    Have you tried writing a PLSQL function to check this?
    I haven't tested this specifically, but something like this should work:
    1) Create a Page Level Validation
    2) Choose PLSQL for the method
    3) Choose Function Returning Boolean for the Type
    For the validation code, you could do something like this:
    DECLARE
        v_cnt number;
    BEGIN
        select count(*)
          into v_cnt
          from your_table
         where col1 = :P1_field1
           and col2 = :P2_field2;
        if v_cnt > 0 then
            return false;
        else
            return true;
        end if;
    END;
    If the function returns false, then your error message will be displayed.
    Not sure how you would go about redirecting after this page though. Maybe just allow the user to try again with another value (in case they made a mistake) or just press a 'cancel' button to finish trying to create a new record.
    Amanda.
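    For anyone doing the same kind of check from a JSP/JDBC layer instead of APEX, a minimal sketch of the equivalent pre-insert existence check could look like this (your_table, col1 and col2 are the placeholder names from the validation above):
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ExistenceCheck {

        // Returns true if a row with this combination of values already exists.
        static boolean alreadyExists(Connection con, String col1Value, String col2Value) throws SQLException {
            String sql = "SELECT COUNT(*) FROM your_table WHERE col1 = ? AND col2 = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, col1Value);
                ps.setString(2, col2Value);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getInt(1) > 0;
                }
            }
        }
    }
    Keep in mind that a SELECT-then-INSERT check can still race with another uncommitted session (the trigger example further down shows the same problem), so it is best used for friendly error messages on top of, not instead of, a unique constraint.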

  • Prevent duplicate entry

I would like to make sure there are no duplicate data entries in my Oracle 9i table (called MainTable), which has an Id field that is the primary key, a ValData field with a varchar data type, and Fid and Fid2 fields with number data types.
    Id   ValData   Fid   Fid2
    1    abc       34    2
    2    efg       23    34
    3    zeo       25    43

    Sometimes someone can enter a duplicate ValData, Fid and Fid2 and it will end up like this:
    Id   ValData   Fid   Fid2
    1    abc       34    2
    2    efg       23    34
    3    zeo       25    43
    4    zeo       25    43

    What constraints or restrictions can I place on the MainTable where it will never allow a duplicate entry into the table?
    I would like to do this somehow in the database. If someone tries to enter a duplicate I should get a error message or something to indicate an attempt to enter duplicate data.
    Please advise if this is possible?

    We told you above - next level of support is onsite but not sure your zipcode is similar to mine.
First you have to clarify whether the three fields must be unique as a combination or whether valdata alone cannot be duplicated. In other words:
    id valdata fid fid2
    1 abc 34 2
    2 abc 23 34
    is this legal?
Depending on the answer you apply the appropriate solution. If the answer is yes, you apply this:
    alter table <table name> add constraint uniq_combination unique (valdata, fid, fid2);
If the answer is that it is illegal, because you want to prevent valdata alone from taking duplicate values, then:
    alter table <table name> add constraint uniq_valdata unique (valdata);
    See Guido's comment above concerning the handling of the DB generated error.
    enrico

  • How to Prevent duplicates on Combination of Lookup columns in sharepoint 2010 using infopath 2010 form.

    Hi All,
I have a list with some lookup columns, like City and Pin, and a text column Name. All of these are required columns.
Now I want to prevent duplicates when submitting the InfoPath form, based on the combination of City, Pin & Name (like a composite primary key in a database).
Can someone help me with how to achieve this using InfoPath 2010 rules, writing the rule in XPath?
    Thanks in Advance.

    1. Add a secondary data connection to the list where the form will be submitted.
    2. Prior to submit via rules, set the query fields in the above connection: City, Pin & Name with values entered in the form. Query the data source and check if the result has values.
3. Show an error message if a matching item exists; otherwise continue with the submit.
    This post is my own opinion and does not necessarily reflect the opinion or view of Slalom.

  • Insert data into table 1 but remove the duplicate data

    hello friends,
    I am trying to insert data into table tab0 using hints.
    The query is like this:
    INSERT INTO /*+ APPEND PARALLEL(tab0) */ tab NOLOGGING
    (select /*+ parallel(tab1)*/
    colu1,col2
    from tab1 a
    where a.rowid =(select max (b.rowid) from tab2 b))
    but this query takes too much time, around 5 hrs,
    because the data is almost 40-50 lakhs (4-5 million rows).
    I am using
    a.rowid = (select max(b.rowid) from tab2 b)
    to remove the duplicate data,
    but it takes too much time.
    So please can you suggest any other option to remove the duplicate data
    so that the performance problem is resolved.
    thanks in advance.

    In the code you posted, you're inserting two columns into the destination table. Are you saying that you are allowed to have duplicates in those two columns but you need to filter out duplicates based on additional columns that are not being inserted?
    If you've traced the session, please post your tkprof results.
What does "table makes bulky" mean? You understand that the APPEND hint forces the insert to happen above the high water mark of the table, right? And you understand that this prevents the insert from reusing space that has been freed up by deletes in the table? And that this can substantially increase the cost of full scans on the table? Did you benchmark the INSERT without the APPEND hint?
    Justin

  • Need advice on preventing duplicate entries in People table

    Hi,
    In my database, I have a "People" table where I store basic information about people e.g. PersonId, FirstName, LastName, Gender, etc.
    There will be lots of entries made into this table and I want to prevent duplicate entries as much as humanly possible. I'd appreciate some pointers on what I should do to minimize duplicates.
    My primary concerns are:
    Duplicate entries for the same person using the person's full name vs. given name e.g. Mike Smith and Michael Smith
    Making sure that two separate individuals with identical names do get entered into the table and each gets their own unique PersonId.
    I'm not sure how I can possibly know whether two individuals with identical names are two different people without having additional information, but I wanted to ask the question anyway.
    Thanks, Sam

    Thank you all very much for your responses.
    There are three separate issues/points here.
    It is clear that it is impossible to prevent duplicates using only a person's first, middle and last names. Once I rely on an additional piece of information, then things get "easier" though nothing is bullet proof. I felt that this was self evident but
    wanted to ask the question anyway.
    Second issue is "potential" duplicates where there are some variations in the name e.g. Mike vs Michael. I'd like a bit more advice on this. I assume I need to create a table to define variations of a name to catch potential duplicates.
    The third point is what Celko brought up -- rather nicely too :-) I understand both his and Erland's points on this as typical relational DB designs usually create people/user tables based upon their context e.g. Employees, Customers, etc.
    I fundamentally disagree with this approach -- though it is currently the norm in most commercial DB designs. The reason is that it actually creates duplicates, and my point is to prevent them. I'm going for more of an object-based approach in the DB
    design, where a person is a person regardless of the different roles he/she may play, and I see no reason to repeat some of the information about the person, e.g. first name, last name, gender, etc., in both the customer and employee tables.
    I strongly believe that all the information that are directly related to a person should be kept in the People table and referenced in different business contexts as necessary.
    For example, I assign every person a PersonId in the People table. I then use the PersonId as part of the primary key in the Customers or Employees table as well. Obviously, PersonId is also a foreign key in Customers and Employees tables. This prevents the
    need for a separate CustomerId and allows me to centralize all the personal data in the People table.
    In my opinion this has three advantages:
    Prevent duplication of data
    Allow global edits, e.g. if the last name of a female employee changes, it is automatically updated for her within the context of the "Customer" role she may play in the application.
    Last but not least, data enrichment, where a person may enter additional data about himself/herself in different contexts. For example, in the employee context, we may have the person's spouse information through "Emergency Contacts", which may come in handy
    within the context of customer for this person.
    Having everyone in the People table gives me these three advantages.
    Thanks, Sam
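    As a small illustration of the name-variation idea from the second point, here is a minimal Java sketch (the nickname map and class name are hypothetical; in practice the variations would live in their own table):
    import java.util.HashMap;
    import java.util.Locale;
    import java.util.Map;

    public class NameVariationCheck {

        // Hypothetical nickname -> canonical-name map; in practice this would be its own table.
        private static final Map<String, String> CANONICAL = new HashMap<>();
        static {
            CANONICAL.put("mike", "michael");
            CANONICAL.put("bob", "robert");
            CANONICAL.put("liz", "elizabeth");
        }

        // Normalizes a first name so "Mike Smith" and "Michael Smith" compare as potential duplicates.
        static String canonicalFirstName(String firstName) {
            String key = firstName.trim().toLowerCase(Locale.ROOT);
            return CANONICAL.getOrDefault(key, key);
        }

        public static void main(String[] args) {
            // Prints true: both forms normalize to "michael".
            System.out.println(canonicalFirstName("Mike").equals(canonicalFirstName("Michael")));
        }
    }
    This only flags potential duplicates for review; as discussed above, it cannot prove that two similarly named rows are the same person.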

  • Preventing duplicate post

    Does anyone have a suggestion regarding how to prevent duplicate posts with JSF?
    As an example, I have one form that allows users to carry out CRUD operations on the data that is being displayed. All successfully processed requests result in a post back to this same page.
    I really want to avoid the problems that could occur if a delete or new request gets reposted by an impatient user.
    Any suggestions are welcome. I should also note that for scalability reasons, I prefer to avoid solutions which require storing token, etc. in the session. I will settle for such a strategy in the absence of a better solution.

    Thanks to all who responded. Just a couple follow-up questions and comments.
    1. Cannot use Shale. I'm in a heavily restricted corporate environment that limits me to IBM's (poor) implementation of JSF 1.0.
    2. Would rather not address the problem solely through the use of JavaScript, because users can always disable JavaScript.
    3. Redirecting back to the page instead of forwarding back is not practical for three reasons. One, it will require my backing bean to make calls down into my application to retrieve data that would already have been readily available if I were forwarding, and I'm not comfortable with that performance decrease. Two, redirecting back to the form makes displaying of validation errors difficult. Three, redirecting to the page would successfully handle the circumstance where a user hits refresh, but doesn't handle the case of an impatient user who clicks the submit button twice while waiting for a response.
    If anyone can propose a solution that works within these constraints, let me know.
    Thanks again for those who are contributing.
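    For what it's worth, here is a minimal sketch of the classic synchronizer-token approach. It does store a token in the session, so it only fits the fallback the original poster mentioned; the class, attribute and parameter names are illustrative, not from any framework:
    import java.util.UUID;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class PostTokenGuard {

        private static final String TOKEN_ATTR = "form.token";

        // Call when rendering the form; embed the returned token as a hidden field.
        public static String issueToken(HttpSession session) {
            String token = UUID.randomUUID().toString();
            session.setAttribute(TOKEN_ATTR, token);
            return token;
        }

        // Call at the start of the POST handler; returns true only for the first submit carrying this token.
        public static boolean consumeToken(HttpServletRequest request) {
            HttpSession session = request.getSession(false);
            if (session == null) {
                return false;
            }
            synchronized (session) {
                String expected = (String) session.getAttribute(TOKEN_ATTR);
                String submitted = request.getParameter("formToken");
                if (expected != null && expected.equals(submitted)) {
                    session.removeAttribute(TOKEN_ATTR);  // token is one-shot
                    return true;
                }
                return false;
            }
        }
    }
    A repeated post (refresh or double click) arrives with an already-consumed token and is simply ignored, which also covers the double-submit case that a redirect alone does not.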

  • How to prevent duplicate submission?

    Hello!
    I have a page for gathering some data:
    private function saveHandler():void {
        // Gather the data and save it into the database.
    }
    <mx:TextInput id="name" />
    <mx:TextInput id="age" />
    <s:Button id="Submit" label="Submit" click="saveHandler();" />
    When the submit button has focus, if I click the button and press the space bar at the same time, the saveHandler function executes two times.
    How can I prevent duplicate submission? thanks!

    Assuming this is a form you are attempting to save, good UX practice is to disable the form once you begin local form validation (submit button clicked) prior to forming your data packet for save.
    Doing so will prevent the anomaly you are experiencing.
    In other cases where the above practice is not practical, you can setFocus to another component (off the submit button) because as long as the focus remains on the button, depressing the space bar will trigger the mouse event for the button. Research ADA compliance if you want more details.
    HTH. 

  • How to prevent duplicate records

    Hello all,
This is just a general query. How do we prevent duplicate records at the data target level (ODS/InfoCube)?
    Thanks
    S N

    Hi,
    For an ODS you can either specify the "unique records" option in the settings (only the first combination of keys is saved; the second one throws an error message) or define "overwrite" in the update rules to the ODS (the last occurrence of a record is saved).
    For InfoCubes I think there is no such setting and no possibility of overwriting existing records. So you have to load first into an ODS object to ensure unique records, and then update from the ODS to the InfoCube.
    Best regards,
    Björn

  • Prevent duplicate outputs within a table?

Hi all,
    we have a table in a subform. At the moment there are several lines with the same material. This is ok.
    Is it possible to prevent duplicate output in the form? We want to print out the rest of this line but set the material to blank.
    I tried this with JavaScript scripting:
    if (data.TF.DATA.field.rawValue == data.TF.DATA[-1].field.rawValue)
         data.TF.DATA.field.rawValue = "";
Is it possible to point to the previous line with
    DATA[-1]
    This works for the conditional break, so I hope it will work here too.
    I don't want to use group levels...
    Thanks.
    Timo

    Thanks for your answer.
    It works.
    data.formular.DATA.field::ready:layout - (FormCalc, client)
    If ($.rawValue == data.form.DATA[-1].field.rawValue) then
         this.presence = "hidden";
    endif

  • Prevent duplicate login

    Hi there,
    I wonder if there is any approach to prevent duplicate logins to the Weblogic
    server using the same userID and password (weblogic-provided or programmatic
    is OK). I tried to use a table to maintain the current active user
    information, but when the user just quits the browser or the weblogic server
    is shut down, this will not work because the flag still remains in the table.
    Has anyone tried it before, or does anyone know how to do it?
    Thanks in advance.
    Ken

    Thanks for your reply.
    Actually, I am now using a similar solution, except that I place a static
    field (a hashtable) in the class that implements the
    HttpSessionBindingListener to record the current users, rather than storing the
    information in a database table. Thus when the application server shuts down,
    I don't need to clear the dirty data in the table. (A rough sketch of this
    approach appears at the end of this thread.)
    Ken
    Andy <[email protected]> wrote in message
    news:[email protected]...
    >
    i'm doing the same thing with an application. i've extended the AuthFilter class
    and whenever a user logs into the application i insert a row into a "current users"
    table. i also set an object into the user's session that implements the HttpSessionBindingListener.
    when the session expires (either by the user logging out or timing out within
    weblogic) the server calls my class that was inserted into the user's session,
    at which time i remove the row from the "current users" table.
    hope this helps -
    "Neil Smithline" <[email protected]> wrote:
    I believe that due to the loose coupling of a web browser and the server as
    defined in the HTTP spec, there is no way to ensure that both sides have an
    identical concept of "logged in". Any solution you propose will have errors
    as you described below. The server just plain can't tell the difference
    between a slow-to-respond user, a user whose browser has crashed, a user who
    is having network problems, etc... This is not a WLS specific problem, it
    is HTTP.
    Neil Smithline
    WLS Security Architect
    BEA Systems
    "Ken Hu" <[email protected]> wrote in message
    news:[email protected]...
    Hi there,
    I wonder if there is any approach to prevent duplicate login to Weblogic
    server using the same userID and password. (weblogic provided or programmatic
    is OK). I tried to use a table to maintain the current active user
    information, but when the user just quits the browser or the weblogic server
    is shut down, this will not work because the flag still remain in the table.
    Does anyone have try it before or know how to do it?
    Thanks in advanced.
    Ken
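    As promised above, here is a rough sketch of that approach (the class and attribute names are hypothetical, not WebLogic API): a listener object is bound into the session at login and removes the user from a shared registry when the session is unbound or times out.
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.http.HttpSession;
    import javax.servlet.http.HttpSessionBindingEvent;
    import javax.servlet.http.HttpSessionBindingListener;

    public class ActiveUserTracker implements HttpSessionBindingListener {

        // Shared in-memory registry of currently logged-in user IDs (replaces the "current users" DB table).
        private static final Set<String> ACTIVE_USERS = ConcurrentHashMap.newKeySet();

        private final String userId;

        private ActiveUserTracker(String userId) {
            this.userId = userId;
        }

        // Call at login; returns false if the user already has an active session.
        public static boolean register(HttpSession session, String userId) {
            if (!ACTIVE_USERS.add(userId)) {
                return false;  // duplicate login attempt
            }
            session.setAttribute("activeUserTracker", new ActiveUserTracker(userId));
            return true;
        }

        @Override
        public void valueBound(HttpSessionBindingEvent event) {
            // Nothing to do; registration happens in register().
        }

        @Override
        public void valueUnbound(HttpSessionBindingEvent event) {
            // Fired on logout or session timeout; free the user ID again.
            ACTIVE_USERS.remove(userId);
        }
    }
    As Neil points out above, this still cannot distinguish a crashed browser from a slow user; the entry is only released when the session actually times out.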

  • Preventing duplicate rows insertion

    Suppose I have a table with 2 columns
    and no constraints on it.
    How will I prevent duplicate row insertion using triggers?

    but i tried to solve the poster's requirement.
    Yes, but the trigger does not solve it.
    The example you posted above, try this:
    do the first insert in your first sql*plus session, and then without committing, open another sql*plus session and do the second insert.
    Do you see an error?
    SQL> create table is_dup(x number, y varchar2(10));
    Table created.
    SQL> CREATE OR REPLACE TRIGGER chk
      2      BEFORE INSERT ON is_dup
      3      FOR EACH ROW
      4  BEGIN
      5      FOR i IN (SELECT * FROM is_dup)
      6      LOOP
      7          IF (:NEW.x = i.x) AND
      8             (:NEW.y = i.y)
      9          THEN
    10              raise_application_error(-20005, 'Record already exist...');
    11          END IF;
    12      END LOOP;
    13  END;
    14  /
    Trigger created.
    SQL> insert into is_dup values(123,'MYNAME');
    1 row created.
    SQL>
    SQL> $sqlplus /
    SQL*Plus: Release 10.2.0.1.0 - Production on Sun Apr 23 10:17:07 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> insert into is_dup values(123,'MYNAME');
    1 row created.
    SQL> exit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> select * from is_dup ;
             X Y
           123 MYNAME
           123 MYNAME
    SQL> commit ;
    Commit complete.
    SQL> select * from is_dup ;
             X Y
           123 MYNAME
           123 MYNAME
    SQL>

BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
    Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with a 8192 byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if in my case it would be more efficient to have a b-tree with as key the combined (4 byte integer, 8 byte integer) and a zero-length or 1-length dummy data (in case zero-length is not an option).
I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking if my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
     while (i < hcp->dup_tlen) {
          memcpy(&len, data, sizeof(db_indx_t));
          data += sizeof(db_indx_t);
          DB_SET_DBT(cur, data, len);
          /*
           * If we find an exact match, we're done. If in a sorted
           * duplicate set and the item is larger than our test item,
           * we're done. In the latter case, if permitting partial
           * matches, it's not a failure.
           */
          *cmpp = func(dbp, dbt, &cur);
          if (*cmpp == 0)
               break;
          if (*cmpp < 0 && dbp->dup_compare != NULL) {
               if (flags == DB_GET_BOTH_RANGE)
                    *cmpp = 0;
               break;
    What's the expert opinion on this subject?
    Vincent

    Hi,
    The special thing about it is that with a given key,
    there can be a LOT of associated data, thousands to
    tens of thousands. To illustrate, a btree with a 8192
    byte page size has 3 levels, 0 overflow pages and
    35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note
    that I wrote "can", since some keys only have a few
    dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default
    lexical ordering with set_dup_compare is OK, so I
    don't touch that. I'm getting the data items sorted
    as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA)
    performance", due to a lot of disk read operations.
    In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity (which implies that the search time depends on the number of keys stored in the underlying db tree). When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate. Thus, given that for each key (in most cases) there is a large number of associated data items (up to thousands or tens of thousands), an impressive number of pages has to be brought into the cache to check against the duplicate criterion.
Of course, the problem of sizing the cache and the database's pages arises here. Your settings for these should tend toward large values; this way the cache will be able to accommodate large pages (each hosting hundreds of records).
    Setting the cache and the page size to their ideal values is a process of experimenting.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
    While there may be a lot of reasons for this anomaly,
    I suspect BDB spends a lot of time tracking down
    duplicate data items.
    I wonder if in my case it would be more efficient to
    have a b-tree with as key the combined (4 byte
    integer, 8 byte integer) and a zero-length or
    1-length dummy data (in case zero-length is not an
    option).
    Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback (a minimal key-packing sketch appears at the end of this thread).
    You can have records with a zero-length data portion.
    Also, you could provide more information on whether or not you're using an environment, if so, how did you configure it etc. Have you thought of using multiple threads to load the data ?
    Another possibility would be to just add all the
    data integers as a single big giant data blob item
    associated with a single (unique) key. But maybe this
    is just doing what BDB does... and would probably
    exchange "duplicate pages" for "overflow pages"
    This is a terrible approach, since bringing an overflow page into the cache is more time consuming than bringing a regular page, and thus a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    Or, the slowdown is a BTREE thing and I could use a
    hash table instead. In fact, what I don't know is how
    duplicate pages influence insertion speed. But the
    BDB source code indicates that in contrast to BTREE
    the duplicate search in a hash table is LINEAR (!!!)
    which is a no-no (from hash_dup.c):
    The Hash access method has, as you observed, a linear duplicate search (and thus a search and lookup time proportional to the number of items in the bucket, i.e. O(n) rather than O(1)). Combined with the fact that you don't want duplicate data, using the Hash access method may not improve performance.
This is a performance/tuning problem and it involves a lot of resources on our part to investigate. If you have a support contract with Oracle, then please don't hesitate to put up your issue on Metalink, or indicate that you want this issue to be taken private and we will create an SR for you.
    Regards,
    Andrei
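    To make the composite-key suggestion mentioned above concrete, here is a minimal sketch in plain Java (no BDB API calls; the class and method names are just illustrative) of packing the 4-byte key and the 8-byte data item into a single 12-byte, MSB-first key so the default lexicographic comparison still sorts correctly:
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class CompositeKey {

        // Packs the 4-byte key and the 8-byte data item into one 12-byte key,
        // most significant byte first, so lexical byte order matches numeric
        // order (for non-negative values).
        static byte[] pack(int key, long dataItem) {
            return ByteBuffer.allocate(12)
                    .order(ByteOrder.BIG_ENDIAN)
                    .putInt(key)
                    .putLong(dataItem)
                    .array();
        }

        // The first 4 bytes alone form the prefix used when simulating
        // DB_NEXT_DUP with a DB_SET_RANGE / DB_NEXT range scan.
        static byte[] prefix(int key) {
            return ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN).putInt(key).array();
        }
    }
    Each record then carries a zero-length data portion, and the range scan stops as soon as the key no longer starts with the 4-byte prefix.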

  • DTP Error: Duplicate data record detected

    Hi experts,
    I have a problem with loading data from a DataSource to a standard DSO.
    In the DS there are master data attributes which have a key containing id_field.
    In the End routine I perform some operations which multiply the lines in the result package and fill a new date field - defined in the DSO (and also in the result_package definition).
    I.e.:
    Result_package before End routine:
    Id_field   attr_a   attr_b   ...   attr_x   date_field
    1          a1       b1             x1
    2          a2       b2             x2
    Result_package after End routine:
    Id_field   attr_a   attr_b   ...   attr_x   date_field
    1          a1       b1             x1       d1
    2          a1       b1             x1       d2
    3          a2       b2             x2       d1
    4          a2       b2             x2       d2
The date_field (date type) is one of the key fields in the DSO.
    When I execute DTP I have an error in section Update to DataStore Object: "Duplicate data record detected "
    "During loading, there was a key violation. You tried to save more than one data record with the same semantic key."
    As far as I know, the result_package key contains all fields except fields of type i, p and f.
    In simulate mode (debugging) everything is correct and the status is green.
    In the DSO I have unchecked the checkbox "Unique Data Records".
    Any ideas?
    Thanks in advance.
    MG

    Hi,
          In the end routine, try giving
    DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE COMPARING  XXX  YYY.
    Here XXX and YYY are keys so that you can eliminate the extra duplicate record.
    Or you can even try giving
        SORT itab_XXX BY field1 field2  field3 ASCENDING.
        DELETE ADJACENT DUPLICATES FROM itab_XXX COMPARING field1 field2  field3.
    This can be given before you loop over your internal table (in case you are using internal tables and loops); itab_xxx is the internal table.
    field1, field2 and field3 may vary depending on your requirement.
    By using the above lines, you can get rid of duplicates coming through the end routine.
    Regards
    Sunil

  • How to delete the duplicate data  from PSA Table

    Dear All,
    How do I delete duplicate data from the PSA table? I have the purchase cube and I am getting the data from the Item DataSource.
    In the PSA table, I found some cancellation records: for those particular records the quantity is negative while, for the same record, the value is positive.
    Because of this, the quantity is updated correctly to the target, but the values are summarized, giving the summarized value of all normal and cancellation records.
    Please let me know the solution for how to delete the data while updating to the target.
    Thanks
    Regards,
    Sai

    Hi,
    Deleting the records in the PSA table is difficult - and how many would you delete?
    You can achieve this in different ways:
    1. Create a DSO and maintain some key fields; it will overwrite based on the key fields.
    2. You can write ABAP logic to delete the duplicate records at InfoPackage level; check with your ABAPer.
    3. You can restrict the cancellation records at query level.
    Thanks,
    Phani.

Maybe you are looking for

  • Memory Leak Acrobat 9.3.1 Batch Conversion

    I set Acrobat to the task of converting ~3.5k text documents to PDFs, using the File | Create PDF | Batch Create Multiple Files menu option. The interface was slow handling the list of 3500 items, but it began processing them at a rate of about 1/sec

  • Camera not working, Facebook, twitter and foursquare doesnt work too

    Hi, I have a few problems with my Blackberry Z10. It is a brand new phone and I just used it for less than 2 weeks. Issue 1: The camera sometimes is not working. The back camera. It occurred 3 times in less than 2 weeks. First time, it say unable to

  • Where are my photos from my old phone? They were backed up in iCloud.

    I got a new iPhone 4S and my old photos were backed up on the icloud. Now I cannot find them on my new phone (same model), but under "manage storage" in settinngs, it shows both my old phone and my new phone data. Please help me access my old photos!

  • JAAS Login Module development/deployment  - getting en error

    Guys, I have developed a JAAS Login Module (as per the SAP documentation) and configured the J2EE Engine  (as per the SAP documentation) for this module to sit amongst several other standard modules,  but I have a problem. I am unable to get the Modu

  • Removeable Anti-Glare screen for new iPad?

    To clarify, I am looking for something that is meant to be taken off and on. For those infrequent ocassions when I'm outside or in a very high light environment and want to read.  I do not want something that applies like a screen protector. I alread