Moving Schema

Hi,
We have a requirement to move a huge schema from one database to another using RMAN (similar to export and import).
Can someone please provide the steps/scripts or a link where I can get these details.
Thanks!

bLaK wrote:
Hi,
We have a requirement to move a huge schema from one database to another using RMAN (similar to export and import).
Can someone please provide the steps/scripts or a link where I can get these details.
Thanks!

RMAN won't handle backup or restore of logical objects such as individual schemas.
Use expdp/impdp to move a schema.
With RMAN you can restore or duplicate an entire database, not a single schema.
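
For reference, a minimal Data Pump sketch of such a schema move (the directory object, schema name, and file names here are hypothetical, not from the thread):

# on the source database host
expdp system SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_schema.dmp LOGFILE=hr_exp.log

# copy hr_schema.dmp to the directory path on the target host, then
impdp system SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_schema.dmp LOGFILE=hr_imp.log

The DIRECTORY parameter names an Oracle directory object that must already exist on each database (e.g. CREATE DIRECTORY dp_dir AS '/u01/dpump';).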

Similar Messages

  • Slapd_user.*.conf and iDS 5.0

    Hi,
    Previously in iDS 4.x I used to carry over my schema to a new instance
    using slapd_user.oc.conf and slapd_user.at.conf. A casual look in the
    config directory of iDS5 shows that things have changed :-)
    1. Does iDS5 support these conf files?
    2. Is there an equivalent way of importing user-defined stuff in iDS5,
    or will I have to use ldapmodify etc.?
    3. Does the iDS5 documentation address the recommended procedure for
    moving schemas?
    Thanks
    -Wajih

    Wajih Ahmed wrote:
    1. Does iDS5 support these conf files?

    No. User-defined schema is now contained in slapd-x/config/schema/99user.ldif.

    Wajih Ahmed wrote:
    2. Is there an equivalent way of importing user-defined stuff in iDS5, or will I have to use ldapmodify etc.?

    You can:
    1) shut down the server, edit this file, and restart the server
    2) add the schema over LDAP using ldapmodify
    3) create your own schema file (e.g. 75my-schema.ldif), copy it to the slapd-x/config/schema directory, and restart the server

    Wajih Ahmed wrote:
    3. Does the iDS5 documentation address the recommended procedure for moving schemas?

    Yes. The migration procedure migrates everything, including your user-defined schema.
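
    For option 2, a minimal ldapmodify sketch (the bind DN, password, and the example attribute OID/name are hypothetical placeholders, not from the thread):

    ldapmodify -h localhost -p 389 -D "cn=Directory Manager" -w password -f my-schema.ldif

    where my-schema.ldif contains:

    dn: cn=schema
    changetype: modify
    add: attributetypes
    attributetypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'myCustomAttr' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )

    Schema added this way should end up in 99user.ldif, so it survives the migration procedure mentioned above.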

  • XE Prod installed successfully on ... (OS & hardware)

    ... SuSE 10. - Dell Latitude C840 - 1GB RAM
    It's the only Oracle DB on that system although many, from 8i on, have cycled thru.
    Uninstalled the beta (as root, rpm -e oracle-xe). Installation 'by the book' - no special effort required at all.
    Congrats to the Oracle team, and especially to our hosts Mark Townsend and Tom Kyte.
    /Hans
    (encouraging people to record their OS and hardware)

    Installed successfully on a Linux box running 64-bit CentOS 4. Hardware is an AMD Athlon 64 3700+ with 2 GB of RAM and a pair of 10,000 RPM Raptors. Ten minutes to download, five minutes to install. Really quite painless.
    Also installed XE on a Dell Inspiron 9300 notebook running Windows XP SP2 with 2 GB of RAM. Quick to install, everything works great. I'm able to load a J2EE application server, a Java-based development environment, and Oracle XE and still maintain good performance. That's three memory hogs working well together under Windows.
    Finally, I tested moving schemas and Apex applications between the Linux and Windows systems several times; everything worked as expected. I will be developing XE Apex applications on Windows and then deploying them on Linux systems.

  • Performance slows down when moving from stage to test schema within the same instance, with the same database tables and objects

    We have created a stage schema and tested the application, which works fine. When we move it to another schema for further testing (this schema was created using the same scripts that were used to create the objects in the staging schema), the performance of the application (developed in .NET) slows down drastically.
    Some of the stored procedures we have checked at the database/SQL Developer level give almost the same performance, but at the application level there is a lot of difference.
    Can you please help?
    We are using Oracle 11g Database.

    Are you using the Database Cloud Service?  You cannot create schemas in the Database Cloud Service, which makes me think you are not.  This forum is only for the Database Cloud Service.
    - Rick Greenwald

  • Moving Subpartitions to a duplicate table in a different schema.

    NOTE: I asked this question on the PL/SQL and SQL forum, but have moved it here as I think it's more appropriate to this forum. I've placed a pointer to this post on the original post.
    Hello Ladies and Gentlemen.
    We're currently involved in an exercise at my workplace where we are in the process of attempting to logically organise our data by global region. For information, our production database is currently at version 10.2.0.3 and will shortly be upgraded to 10.2.0.5.
    At the moment, all our data 'lives' in the same schema. We are in the process of producing a proof of concept to migrate this data to identically structured (and named) tables in separate database schemas; each schema to represent a global region.
    In our current schema, our data is range-partitioned on date, and then list-partitioned on a column named OFFICE. I want to move the OFFICE subpartitions from one schema into an identically named and structured table in a new schema. The tablespace will remain the same for both identically-named tables across both schemas.
    Do any of you have an opinion on the best way to do this? Ideally in the new schema, I'd like to create each new table as an empty table with the appropriate range and list partitions defined. I have been doing some testing in our development environment with the EXCHANGE PARTITION statement, but this requires the destination table to be non-partitioned.
    I just wondered if, for partition migration across schemas with the table name and tablespace remaining constant, there is an official "best practice" method of accomplishing such a subpartition move neatly, quickly and elegantly?
    Any helpful replies welcome.
    Cheers.
    James

    You CAN exchange a subpartition into another table using a "temporary" (staging) table as an intermediary.
    See:
    SQL> drop table part_subpart purge;
    Table dropped.
    SQL> drop table NEW_part_subpart purge;
    Table dropped.
    SQL> drop table STG_part_subpart purge;
    Table dropped.
    SQL>
    SQL> create table part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition p_1 values less than (10) (subpartition p_1_s_1 values ('A'), subpartition p_1_s_2 values ('B'), subpartition p_1_s_3 values ('C'))
      5  ,
      6  partition p_2 values less than (20) (subpartition p_2_s_1 values ('A'), subpartition p_2_s_2 values ('B'), subpartition p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create index part_subpart_ndx on part_subpart(col_1) local;
    Index created.
    SQL>
    SQL>
    SQL> insert into part_subpart values (1,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'C');
    1 row created.
    SQL> insert into part_subpart values (11,'A');
    1 row created.
    SQL> insert into part_subpart values (11,'C');
    1 row created.
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create table NEW_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition n_p_1 values less than (10) (subpartition n_p_1_s_1 values ('A'), subpartition n_p_1_s_2 values ('B'), subpartition n_p_1_s_3 values ('C'))
      5  ,
      6  partition n_p_2 values less than (20) (subpartition n_p_2_s_1 values ('A'), subpartition n_p_2_s_2 values ('B'), subpartition n_p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create table STG_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  /
    Table created.
    SQL>
    SQL> -- ensure that the Staging table is empty
    SQL> truncate table STG_part_subpart;
    Table truncated.
    SQL> -- exchanging a subpart out of part_subpart
    SQL> alter table part_subpart exchange subpartition
      2  p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL> -- exchanging the subpart into NEW_part_subpart
    SQL> alter table NEW_part_subpart exchange subpartition
      2  n_p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL>
    SQL>
    SQL> select * from NEW_part_subpart subpartition (n_p_2_s_1);
         COL_1 COL_2
            11 A
    SQL>
    SQL> select * from part_subpart subpartition (p_2_s_1);
    no rows selected
    SQL>

    I have exchanged subpartition p_2_s_1 out of the table part_subpart into the table NEW_part_subpart -- even with a different name for the subpartition (n_p_2_s_1) if so desired.
    NOTE: Since your source and target tables are in different schemas, you will have to move (or copy) the staging table STG_part_subpart from the first schema to the second schema after the first "exchange subpartition" is done. You will have to do this for every subpartition to be exchanged.
    Hemant K Chitale
    Edited by: Hemant K Chitale on Apr 4, 2011 10:19 AM
    Added clarification for cross-schema exchange.
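
    A hedged sketch of that cross-schema hop, following Hemant's demo (the schema names SRC and DST are hypothetical, and it assumes DST has its own empty staging table of identical structure):

    -- copy the staged rows across schemas (this moves data, unlike the
    -- metadata-only exchanges)
    INSERT /*+ APPEND */ INTO dst.stg_part_subpart
    SELECT * FROM src.stg_part_subpart;
    COMMIT;

    -- then exchange into the identically structured table in DST
    ALTER TABLE dst.new_part_subpart EXCHANGE SUBPARTITION n_p_2_s_1
      WITH TABLE dst.stg_part_subpart;

    If privileges permit, the exchange can instead reference the staging table in the other schema directly (WITH TABLE src.stg_part_subpart), which avoids copying the rows at all.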

  • What is the impact on an Exchange server when moving the FSMO roles and schema master to another DC?

    What is the impact on an Exchange server when moving the FSMO roles, including the schema master, to another DC? What do we have to do on Exchange after performing such a task?
    I had one DC (Windows Server 2008 R2) and one Exchange 2010 SP3 server. I installed a new DC (Windows Server 2008 R2), then moved all the FSMO roles, including the schema master role, to the new DC. I checked to be sure that the new DC is a GC as well.
    When I shut down the old DC, my Exchange server stopped working properly, especially the Exchange Management Shell. It started working again after I brought the old DC back up.
    I am wondering why Exchange did not recognize the new DC, even after moving all the roles to it.
    I am looking forward to hearing from you guys.
    Thanks a lot

    If you only have one DC left, you might need to cycle the AD Topology service after shutting the old one down.
    Also, take a look in the Windows logs; there should be an event where Exchange goes to discover Domain Controllers. Make sure both DCs are listed there. You can probably force that discovery by cycling AD Topology (this will take all Exchange services down, so be careful when you do it).

  • PLS-00201 error after moving a package to a new schema

    Hi,
    I've moved a package to a new schema, and all the packages in the original schema that reference the moved package now fail to compile. The moved package has had a public synonym created, and execute privileges were assigned to the original schema via a role. What am I missing? Using 11gR2, version 11.2.0.3.0.

    Privileges granted through roles do not apply to stored procedures and packages that are compiled with definer rights (the default).  You need to grant the original schema execute privileges on the new schema's package directly.
    John
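
    A minimal sketch of that direct grant (the schema and package names are hypothetical):

    -- as the new owner (or a DBA), grant execute directly, not via a role
    GRANT EXECUTE ON new_schema.moved_pkg TO original_schema;

    -- then recompile the failing packages in the original schema
    ALTER PACKAGE original_schema.calling_pkg COMPILE;

    The public synonym only resolves the name; for definer-rights compilation the direct object privilege is what matters.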

  • Moving Target DB and Abstract Schema

    I apologize in advance for seeming clueless. My explanation is this: there is no money, I have inexperienced staff, I've been away from building architectures too long to be specific, and I can't buy a contractor. I need some advice.
    We are converting many Access applications to Java/J2EE/AnyRelationalDB. The way we have planned to approach this is to divide the DBs into various classes (say Personnel records, Vehicles, and so on). These DBs will be moving targets that will change as we discover Access applications that add/change features in whatever class of DB we're working with at the moment.
    My goal is to eliminate changing each and every app every time some DB parameter changes (DBMS, changed attribute, etc.). I think EJB/abstract schemas will let me get a generic view of the DB and insulate the app from the very real possibility of changing DB parameters.
    I need some help verifying this or pointing me in a better direction.
    Thanks for your help,
    Bob

    Bob wrote:
    There is no money. I have inexperienced staff. I've been away from building architectures too long to be specific. I can't buy a contractor.

    My first advice is that this description of your team doesn't bode well for the success of the project you describe. Let me frame it in another context to illuminate how dubious this sounds:
    I want to build a house with curved glass walls and high vaulted ceilings, perched on a steep hillside. There is no money. I have inexperienced staff. I've been away from building houses too long to be specific. I can't buy a contractor.

    Bob wrote:
    I think EJB/abstract schemas will let me get a generic view of the DB and insulate the app from the very real possibility of changing DB parameters.

    If you use an EJB layer that supports XDoclet or other portable CMP, yes, it will do this. However, it's not simple, and if your table structure changes significantly, your EJBs will not adapt automatically. The fact of the matter is that EJB is pretty complex and requires a lot of esoteric knowledge. Many EJB projects have failed or produced terrible results. If you don't have any very capable developer/designers and/or have no developers with solid EJB experience, I would under no circumstances attempt this. EJB is often overkill anyway. The real point of EJB is to help with distributed computing, not to abstract away the DB schema.
    A simple approach that many people overlook is to use stored procedures. Stored procedures create a layer of abstraction between your code and the DB such that the DB can change without changing the code.
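
    A minimal sketch of that stored-procedure layer (the table, column, and procedure names are hypothetical); the application calls the procedure and never references the table directly:

    CREATE OR REPLACE PROCEDURE get_vehicle_owner (
        p_vehicle_id IN  NUMBER,
        p_owner_name OUT VARCHAR2
    ) AS
    BEGIN
        -- the underlying table can be renamed or restructured without
        -- touching application code, as long as this signature holds
        SELECT owner_name
          INTO p_owner_name
          FROM vehicles
         WHERE vehicle_id = p_vehicle_id;
    END;
    /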

  • EJB, Moving Target DB, Abstract Schema

    (The question is a verbatim repost of "Moving Target DB and Abstract Schema" above.)

    I think your best option is to implement CMP entity beans with a facade of services (business logic) that access the beans as tables in a DB. The only advantage of doing this will be DB vendor independence and transparency, because you define static queries in a declarative way.
    I don't quite understand what you mean by DB parameters. But if you are referring to changes to the database schema, like new tables, new fields, or changes to existing fields, you still need to align those changes with the attributes in your application.
    Cheers

  • SDK Schemas have moved - but where?

    Probably because of the recent OTN web site reorganization, the XSD files for XML extensions have moved. Would someone please update http://wiki.oracle.com/page/SQL+Dev+SDK+How+Tos to point to the new locations of navigator.xsd, query.xsd, editors.xsd ...
    Thanks.

    Would someone (Sue Harper?) please give us an ETA on providing the info we developers need to develop extensions to SqlDeveloper?
    We have been waiting over a year to:
    1. Get the XSD files the poster refers to - the list of XSDs is on the page cited but all of the links take you to a generic download page and the XSDs are nowhere to be found. These must exist somewhere so it is very frustrating that no one on the development team will provide them.
    2. Get the API Javadocs so we can understand the java classes available and how to use them. As with #1, these must be available to the development team so why won't you release them to us?
    3. Get a working example of a Java extension. The lone example provided is not useful since it is really just an XML extension written in Java. A useful Java extension would show how to create the hooks to cause SqlDeveloper to perform callbacks to the Java extension code when certain user actions take place. Same here as with #1 and #2. It's hard to believe that someone on the dev team doesn't have the code for a simple Java extension with callbacks.
    JDeveloper has 'hook' elements in its example extension.xml files but there is no documentation for SqlDeveloper to show the equivalent.
    Please either provide the above requested items, provide an ETA on when you will provide the items or at least be gracious enough to tell us you won't provide the items.
    I'm sure there are many like myself that would love to start working with extensions but can't because you won't share information and data that almost assuredly already exists.
    What is your position on these issues?

  • Moving data between two schemas

    I need to move data between two schemas. I have created packaged code to accomplish this. The problem is the execution time: when running the insert statements from the source schema to insert data into the target schema, the statement takes considerably longer to complete than if I copied the tables from the source schema into the target schema and executed the same statement in the target schema. Any insight as to why this might be?
    All data resides on the same physical disk, running version 10g on a W2K server.
    Thanks in advance
    Here is a sample of one of the insert statements:
    INSERT INTO target_table (tt_id, tt_disp, tt_date, tt_emp_1, tt_emp_2, tt_emp_3)
    SELECT src_tab.src_id,
           src_tab.scr_disp,
           src_tab.scr_date,
           src_tab.scr_emp_1,
           src_tab.scr_emp_2,
           src_tab.scr_emp_3
    FROM (SELECT row_number() OVER (
                     ORDER BY SUBSTR(fn_cil_sort_format(SUBSTR(src_cil, 1, 8)), 1, 4),
                              SUBSTR(src_cil, 4, 8)) AS src_id,
                 scr_disp,
                 fn_date_format(date_time) AS scr_date,
                 v_convert AS scr_emp_1,
                 v_convert AS scr_emp_2,
                 v_convert AS scr_emp_3
          FROM source_table
          ORDER BY SUBSTR(fn_sort_format(SUBSTR(src_cil, 1, 8)), 1, 4),
                   SUBSTR(src_cil, 4, 8)) src_tab
    WHERE scr_disp IS NOT NULL;

    In addition to the above post, you should create the table initially with NOLOGGING. CREATE TABLE AS SELECT can bypass logging, which should increase performance considerably, since no redo log writes will have to take place.
    Lee
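
    A minimal sketch of that approach (the schema and table names are hypothetical; note that NOLOGGING loads are not recoverable from redo, so take a backup afterwards):

    CREATE TABLE target_schema.target_table NOLOGGING
    AS SELECT * FROM source_schema.source_table;

    -- later bulk loads can use a direct-path insert to keep redo low
    INSERT /*+ APPEND */ INTO target_schema.target_table
    SELECT * FROM source_schema.source_table_delta;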

  • Moving all Materialized View and logs at schema level using data pump

    Hi Experts,
    Please help me with how I can export/import only the materialized views and MV logs (these are local MVs) of a complete schema to another database. I want to exclude everything else.
    Regards
    -Samar-

    Use DBMS_METADATA. Create the following SQL script:
    SET FEEDBACK OFF
    SET SERVEROUTPUT ON FORMAT WORD_WRAPPED
    SET TERMOUT OFF
    SPOOL C:\TEMP\MVIEW.SQL
    DECLARE
        CURSOR V_MLOG_CUR
          IS
            SELECT  DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW_LOG',LOG_TABLE) DDL
              FROM  USER_MVIEW_LOGS;
        CURSOR V_MVIEW_CUR
          IS
            SELECT  DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW',MVIEW_NAME) DDL
              FROM  USER_MVIEWS;
    BEGIN
        DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',TRUE);
        FOR V_REC IN V_MLOG_CUR LOOP
          DBMS_OUTPUT.PUT_LINE(V_REC.DDL);
        END LOOP;
        FOR V_REC IN V_MVIEW_CUR LOOP
          DBMS_OUTPUT.PUT_LINE(V_REC.DDL);
        END LOOP;
    END;
    /
    SPOOL OFF

    In my case the script is saved as C:\TEMP\MVIEW_GEN.SQL. Now I'll create a mview log and mview in the SCOTT schema and run the above script:
    SQL> CREATE MATERIALIZED VIEW LOG ON EMP
      2  /
    Materialized view log created.
    SQL> CREATE MATERIALIZED VIEW EMP_MV
      2  AS SELECT * FROM EMP
      3  /
    Materialized view created.
    SQL> @C:\TEMP\MVIEW_GEN
    SQL>

    Running script C:\TEMP\MVIEW_GEN.SQL generated a spool file, C:\TEMP\MVIEW.SQL:
    CREATE MATERIALIZED VIEW LOG ON "SCOTT"."EMP"
      PCTFREE 10 PCTUSED 30 INITRANS 1 MAXTRANS 255 LOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
      FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "USERS"
      WITH PRIMARY KEY EXCLUDING NEW VALUES;

    CREATE MATERIALIZED VIEW "SCOTT"."EMP_MV" ("EMPNO", "ENAME", "JOB", "MGR",
      "HIREDATE", "SAL", "COMM", "DEPTNO")
      ORGANIZATION HEAP PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
      NOCOMPRESS LOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
      FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "USERS"
      BUILD IMMEDIATE
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
      FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "USERS"
      REFRESH FORCE ON DEMAND
      WITH PRIMARY KEY USING DEFAULT LOCAL ROLLBACK SEGMENT
      USING ENFORCED CONSTRAINTS DISABLE QUERY REWRITE
      AS SELECT "EMP"."EMPNO" "EMPNO", "EMP"."ENAME" "ENAME", "EMP"."JOB" "JOB",
         "EMP"."MGR" "MGR", "EMP"."HIREDATE" "HIREDATE", "EMP"."SAL" "SAL",
         "EMP"."COMM" "COMM", "EMP"."DEPTNO" "DEPTNO" FROM "EMP" "EMP";

    Now you can run this on the other database. You might need to adjust the tablespace and storage clauses, or you can add more DBMS_METADATA.SET_TRANSFORM_PARAM calls to C:\TEMP\MVIEW_GEN.SQL to force DBMS_METADATA not to include the tablespace and/or storage clauses.
    SY.
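
    If a pure Data Pump route is preferred, the valid INCLUDE/EXCLUDE object paths for a schema-mode job can be checked first; whether MV logs are selectable this way depends on the release, so treat this as a starting point rather than a guarantee:

    SELECT object_path, comments
      FROM schema_export_objects
     WHERE object_path LIKE '%MATERIALIZED%';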

  • Moving only data between 2 oracle schemas

    Hi Folks,
    I would like to know if there is some way to move only the data (without recreating the structure) between 2 schemas in the same instance. Could someone help me with this challenge?
    Best Regards,
    Everton Lucas

    INSERT INTO user1.table_name SELECT * FROM user2.table_name;
    or
    expdp/impdp with CONTENT=DATA_ONLY
    or
    impdp with TABLE_EXISTS_ACTION=APPEND
    or
    wherever your imagination goes...
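
    A minimal sketch of the Data Pump variant (the directory object and file names are hypothetical):

    expdp system SCHEMAS=user2 CONTENT=DATA_ONLY DIRECTORY=dp_dir DUMPFILE=user2_data.dmp
    impdp system REMAP_SCHEMA=user2:user1 TABLE_EXISTS_ACTION=APPEND DIRECTORY=dp_dir DUMPFILE=user2_data.dmp

    CONTENT=DATA_ONLY skips all DDL, and TABLE_EXISTS_ACTION=APPEND loads the rows into the tables that already exist in user1.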

  • Large data moving from one schema to another schema and tablespace

    Dear DBA's
    Oracle 11.2.0.1
    OS : Solaris 10.
    We have 1 TB of data in schema A, and I want to move it to schema B for other testing purposes. Which method is good for export/import of this large a data set? Kindly advise.
    Thanks and Regards
    SG

    Hi
    You can use expdp/impdp or Transportable Tablespaces. Please check the note below:
    Using Transportable Tablespaces for EBS Release 12 Using Database 11gR2 [ID 1311487.1]
    Regards
    Helios
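
    Before committing to the Transportable Tablespace route, the tablespace set can be verified as self-contained (a minimal sketch; the tablespace name is hypothetical):

    EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS_A', TRUE);
    SELECT * FROM transport_set_violations;

    An empty violations result means the set can be transported; for a 1 TB move this avoids re-loading the data row by row.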

  • Moving procedures and sequences from one schema to another

    Hi all,
    Is there any way to export the procedures and sequences alone from one schema to another? If not, is there any way to generate the procedure creation scripts from the source schema?
    I used the following script:
    SET HEADING OFF
    SET PAGESIZE 999
    SET LINESIZE 100
    SPOOL C:\A.SQL
    SELECT DBMS_METADATA.GET_DDL('PROCEDURE', NAME, OWNER) || '/'
    FROM ALL_SOURCE
    WHERE OWNER='SCOTT' AND TYPE='PROCEDURE';
    SPOOL OFF
    But the problem is that in the output script the lines are being cut, e.g.:
    CREATE OR REPLACE PROCEDURE "QC_PFIZER_REL5"."SPGETNEXTDS
    S_ID"
    ( V_ID OUT NUMBER )
    IS
    BEGIN
    SELECT SEQUENCE_DSS_Id.NEXTVAL INTO V_ID FROM DUAL;
    END ;
    I experimented with increasing LINESIZE but it is not helping. I am using 9.2.0.5 on Windows 2003.
    Thanks
    Muneer

    Similar to getting the code from user_source, you could get sequences from user_sequences.
    SQL> select dbms_metadata.get_ddl('SEQUENCE', sequence_name) from user_sequences ;
    DBMS_METADATA.GET_DDL('SEQUENCE',SEQUENCE_NAME)
    CREATE SEQUENCE  "SCOTT"."SEQ"  MINVALUE 1 MAXVALUE 1.00000000000000E+27 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER  NOCYCLE
    1 row selected.
    SQL>
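
    The truncation Muneer describes is usually the SQL*Plus LONG display limit rather than LINESIZE, since DBMS_METADATA.GET_DDL returns a CLOB. A hedged sketch of the settings that commonly fix the spooled output:

    SET LONG 100000
    SET LONGCHUNKSIZE 100000
    SET LINESIZE 32767
    SET PAGESIZE 0
    SET TRIMSPOOL ON

    With LONG at its default of 80, SQL*Plus cuts the returned DDL after 80 characters regardless of LINESIZE.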
