DDL changes 2.4.3 - 2.5.8

I noticed changes in the DDL generated by the schema tool.
I have two classes, A and B, where A inherits from B.
The classes live in different packages.
The DDL generated by Kodo 2.4.x is quite compact:
CREATE TABLE BUH_BUCHUNG (AUFGELAUFENEZINSENX NUMBER, BUCHUNGSTYPX NUMBER,
COMMENT1 VARCHAR2(255), COMMENT2 VARCHAR2(255), FORDTYPX NUMBER,
ISUNTERKONTOBUCHUNG NUMBER(1), JDOCLASSX VARCHAR2(255), JDOIDX NUMBER NOT
NULL, JDOLOCKX NUMBER, KONTOMGRID NUMBER, STATUSID NUMBER, TIMESTAMP DATE,
USERID VARCHAR2(255), VALUTAX DATE, VORGANGSIDX NUMBER, ZINSTYPX NUMBER,
ZINSX NUMBER, PRIMARY KEY (JDOIDX));
whereas Kodo 2.5.x generates the table for the base class followed by ALTER
TABLE statements for the subclass:
CREATE TABLE BUH_BUCHUNG (BUCHUNGSTYPX NUMBER, COMMENT1 VARCHAR2(255),
COMMENT2 VARCHAR2(255), ISUNTERKONTOBUCHUNG NUMBER(1), JDOCLASSX
VARCHAR2(255), JDOIDX NUMBER NOT NULL, JDOLOCKX NUMBER, KONTOMGRID NUMBER,
STATUSID NUMBER, TIMESTAMP DATE, USERID VARCHAR2(255), VALUTAX DATE,
VORGANGSIDX NUMBER, PRIMARY KEY (JDOIDX));
ALTER TABLE BUH_BUCHUNG ADD AUFGELAUFENEZINSENX NUMBER;
ALTER TABLE BUH_BUCHUNG ADD FORDTYPX NUMBER;
ALTER TABLE BUH_BUCHUNG ADD ZINSTYPX NUMBER;
ALTER TABLE BUH_BUCHUNG ADD ZINSX NUMBER;
Is there a way to switch the behaviour in 2.5 so that I get compact
CREATE TABLE statements without ALTER TABLE, as in 2.4?

Stefan wrote:
Is there a way to switch the behaviour in 2.5 so that I get compact
CREATE TABLE statements without ALTER TABLE, as in 2.4?

No, but if you upgrade to Kodo 3, you'll see more compact DDL.
-Patrick

Similar Messages

  • How to track DDL Changes and source code changes

    How can I track DDL changes and source code changes (functions, procedures, packages and views) made to selected schemas?
    I want to maintain a history of both the DDL changes and the source code changes. How can I do that? Please provide guidance with an example...

    Hi,
    you could use a DDL trigger (before create)
    to capture the code and do the auditing as well.
    Try this:
    SQL> create table old_code
      as
      select user username, 0 version, sysdate date_changed, user_source.*
      from user_source
      where 1=0
      /
    Table created.
    SQL> create sequence version_seq;
    Sequence created.
    SQL> create or replace trigger create_trigger
      before create on schema
      declare
        l_date date := sysdate;
        l_ver  number;
      begin
        if (ora_dict_obj_type in ('PACKAGE', 'PACKAGE BODY', 'PROCEDURE', 'FUNCTION'))
        then
          select version_seq.nextval into l_ver from dual;
          insert into old_code
          select user, l_ver, l_date, user_source.*
          from user_source
          where name = ora_dict_obj_name
          and type = ora_dict_obj_type;
        end if;
      end;
      /
    Trigger created.
    SQL> create or replace function f return number
      as
      begin
        return 0;
      end;
      /
    Function created.
    SQL> select * from old_code;
    no rows selected
    SQL> create or replace function f return date
      as
      begin
        return sysdate;
      end;
      /
    Function created.
    SQL> select * from old_code;
    USERNAME    VERSION DATE_CHAN NAME TYPE     LINE TEXT
    aaaaaaaaaaa       2 17-OCT-02 F    FUNCTION    1 function f return number
    aaaaaaaaaaa       2 17-OCT-02 F    FUNCTION    2 as
    aaaaaaaaaaa       2 17-OCT-02 F    FUNCTION    3 begin
    aaaaaaaaaaa       2 17-OCT-02 F    FUNCTION    4 return 0;
    aaaaaaaaaaa       2 17-OCT-02 F    FUNCTION    5 end;

  • OBIEE 11g: Physical Layer acting as level of abstraction against ddl change

    Does anyone know of a way to insulate the RPD physical layer from DDL changes on the database. So, for instance a column name changes. Currently we use physical views (PVs) to provide a layer of abstraction but want to see if there is another solution. If the database team changes the name of a column, we want it to be seemless and not require changes within the RPD. There is the physical update in the utility box but this is manual so would not be a solution unless it could be automated. Any ideas on how to handle would be appreciated.
    Thanks,

    I don't think there is any utility that captures column name changes as such.
    However, you can handle this if your modelling is based on views or materialized views.
    You can ask your DBA to change the actual table column names but not the view or materialized view names.
    If you always use an alias for each column in your view query, you can keep the view columns stable even if a base table column changes.
    You will still have to update the view definition whenever a column name changes, though.
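    A minimal sketch of the aliasing approach (all table and column names below are hypothetical):

    ```sql
    -- Hypothetical base table and physical view (PV); the aliases keep the
    -- view's column names stable for the RPD even if the base table changes.
    CREATE OR REPLACE VIEW sales_pv AS
    SELECT ord_id  AS order_id,        -- alias shields the RPD from renames
           cust_nm AS customer_name
    FROM   sales_base;

    -- If the DBA later renames SALES_BASE.CUST_NM to CUSTOMER_NM, only the
    -- view definition changes; the RPD still sees CUSTOMER_NAME:
    CREATE OR REPLACE VIEW sales_pv AS
    SELECT ord_id      AS order_id,
           customer_nm AS customer_name
    FROM   sales_base;
    ```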
    Please mark helpful or correct if it answers your question.
    Regards,
    Veeresh Rayan

  • Handling ddl changes in source for synchrnous cdc

    Hi all,
    I am now considering whether to use asynchrnous cdc or synchronous cdc for my BI project and one of the issues we are looking at is how ddl changes to the source are managed. What are the possible solutions to look at for managing ddl changes for synchronous cdc? I understand that for asynchrnous cdc there is this stop_on_ddl option for change sets where you can disable the change sets upon ddl changes to the source tables. How about synchrnous cdc? Are there any ways to deal with ddl changes to the source ?

    I've got an idea, but it has to be done for each column of each SOURCE datastore...
    You could create "ODI conditions" based on your data warehouse columns.
    Each condition says: "length( name_of_field ) <= XXX".
    I assume you can set the XXX with the substitution method "getColumn()" and the pattern "LONGC".
    These conditions will run only on STATIC control.
    In your ODI package, before launching your interface, you can check the source datastore against these conditions with a static control. All records with a column whose actual source length differs from the length defined in ODI will be rejected into E$ and, if you want, deleted from the source (optional).
    You could also imagine an extra custom LKM step that scans all your datastore columns (with the getColList method) and compares their lengths to the target datastore columns, rejecting the bad data. That would be harder to implement (100% custom code), but faster to propagate everywhere.

  • How to restrict one user making DDL changes in another schema

    Can anyone please let me know how to prevent a user from making DDL changes in another schema through a procedure (the user has execute privilege on a procedure owned by that other schema)?
    Say we have schemas A and B. The requirement is that user A grants execute privilege on all procedures created in A to user B. But if a procedure is created that performs DDL (via EXECUTE IMMEDIATE) and execute privilege on it is granted to user B, then user B can make DDL changes through that procedure. I want to prevent user B from making any DDL changes to schema A.
    Appreciate your help.
    Thanks,
    Karthik

    Your requirement doesn't sound terribly sensible.  If you want B to be able to execute the procedure, grant B access to the procedure.  If you don't want B to be able to execute the procedure, don't grant B access.  If you are concerned that someone is going to change the code in the procedure after it is created to transform it from something that you want B to have access to into something that you don't want B to have access to, you should be reviewing changes before they are promoted.
    If you're really determined to do things "uniquely", I suppose you could create a DDL trigger that looks to see who the user is and throws an exception if it is B.  But that seems like a poor second choice to dealing with the problem sensibly from the outset.
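    For what it's worth, that "poor second choice" trigger might look something like this (a sketch only; the username check is deliberately simplistic):

    ```sql
    -- Hypothetical DDL trigger that rejects DDL issued directly by user B.
    -- As noted, this is easy to circumvent (DBMS_JOB, DBMS_SCHEDULER, ...).
    CREATE OR REPLACE TRIGGER block_b_ddl
    BEFORE DDL ON DATABASE
    BEGIN
      IF ora_login_user = 'B' THEN
        RAISE_APPLICATION_ERROR(-20001, 'DDL is not permitted for user B');
      END IF;
    END;
    /
    ```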
    If you aren't reviewing the changes and building code that tries to check whether something bad got slipped in, you're going to be fighting a losing battle.  If you write a DDL trigger that checks the user, for example, and I'm a developer that's intent on creating holes, I could simply add code to a procedure that submits a dbms_job that executes DDL.  That job would run as A so your DDL trigger would let it pass.  Of course, you could then turn around and disable DBMS_JOB in which case I could use DBMS_SCHEDULER instead.  You could disable that as well but then you're disabling the scheduler jobs that Oracle runs by default for things like gathering statistics.  And then the rogue developer simply moves on to the next hole to exploit.
    Justin

  • How to identify the DDL changes ?

    Hi all,
    How can I identify DDL changes made to the database?
    Only by triggers, or can we also use LogMiner, or
    SELECT * FROM USER_OBJECTS WHERE object_type = 'TABLE' ORDER BY LAST_DDL_TIME DESC;
    or are there other options available?
    Thanks in advance,
    Pal

    Something from asktom might help:
    tkyte@TKYTE816> create or replace trigger ddl_trigger
      after create or alter or drop on SCHEMA
      declare
        l_sysevent varchar2(25);
        l_extra    varchar2(4000);
      begin
        select ora_sysevent into l_sysevent from dual;
        if ( l_sysevent in ('DROP', 'CREATE') )
        then
          if l_sysevent = 'CREATE'
          then
            begin
              select 'storage ( initial ' || initial_extent ||
                     ' next ' || next_extent || ' .... )'
                into l_extra
                from all_tables
               where table_name = ora_dict_obj_name
                 and owner = user;
            exception
              when no_data_found then null;
            end;
          end if;
          insert into log
          select ora_sysevent, ora_dict_obj_owner,
                 ora_dict_obj_name, l_extra
            from dual;
        elsif ( l_sysevent = 'ALTER' )
        then
          insert into log
          select ora_sysevent, ora_dict_obj_owner,
                 ora_dict_obj_name, sql_text
            from v$open_cursor
           where upper(sql_text) like 'ALTER%' || ora_dict_obj_name || '%'
             and sid = ( select sid
                           from v$session
                          where audsid = userenv('sessionid') );
        end if;
      end;
      /

  • Generate DDL - change is a new column and I want to generate a alter table

    Morning all,
    I have searched and looked all over the data modeler and I cannot find this option ... yet I did find it easily in Designer.
    I hope you can help me.
    SQL Developer Data Modeler v3.0.0.665.
    I have added a new column to a table and when I generate the DDL I would like it to be an alter table add column rather than a create table.
    This feature is in Designer so I would think it would be in data modeler.
    Just in case my description is not clear, here are the high-level steps:
    1. create the logical model
    2. create the relational from the logical.
    3. create the physical from the relational.
    4. generate DDL and run in database. At this point I go to production with my system and all is well.
    5. At this point we have an enhancement request. For the model it will be a new column in a table.
    6. update logical model.
    7. update relational from logical
    8. update physical from relational
    9. generate DDL. Here I would like to have the generate be aware the it needs only to generate an alter table add column and not create the table.
    This is something I do a lot, as all my models are in production. I cannot find out how to get Data Modeler to generate the alter.
    Designer does this exceptionally well.
    Quite often it is more than a single column. The changes can be many and made over time, and at the time of generating the DDL you may not recall every single change you made. Having the tool discover those changes for you and generate the appropriate DDL is a feature I regard very highly.
    I hope this is clear and you can help me.
    Cheers
    Chris ....

    Hi Chris,
    you need to compare your model against the database - import from the database into the same relational model and use the "swap target" option; "alter statements" against the database will then be generated.
    You can look at demonstrations here http://www.oracle.com/technetwork/developer-tools/datamodeler/demonstrations-224554.html
    Probably this particular one will be most helpful http://download.oracle.com/otn_hosted_doc/sqldev/importddl/importddl.html
    Sooner or later your changes will require the table to be recreated and you'll need to back up your data - you can consider the "Advanced DDL" option: a script will be generated that unloads the content of your table (including LOBs) to a file system accessible from the database and restores it after the changes. Just don't try it directly on a production system :).
    Philip

  • DDL trigger to Capture the DDL changes on one database and applies to child level databases in same server.

    Hi friends,
        I need to create one DDL trigger to capture all DDL modifications on a parent database and apply those changes to the underlying (child) databases in my project.
        Can anyone help me with this - how do I track the changes and apply them to the child-level databases?
    Thanks in Advance.

    Use Visual Studio Data Tools or
    Red Gate Compare.

  • How to find the user who made DDL changes to my PL/SQL objects

    Hi,
    I am running Oracle 10g on my server. Somebody performed some DDL on my PL/SQL objects and they became invalid; I want to find out which Oracle user made the modification.
    Any one can help me regarding this..
    Thanks in advance
    Karthik...

    Unless you had auditing turned on or DDL triggers in place, you will not be able to tell after the fact.
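    For future occurrences, statement auditing could be enabled beforehand. A rough sketch (the object name is hypothetical, and this assumes the AUDIT_TRAIL initialization parameter is set):

    ```sql
    -- Audit CREATE/DROP of procedures, functions and packages from now on
    AUDIT PROCEDURE;
    -- Later, query the audit trail to see who touched what:
    SELECT username, obj_name, action_name, timestamp
    FROM   dba_audit_trail
    WHERE  obj_name = 'MY_PACKAGE'   -- hypothetical object name
    ORDER  BY timestamp DESC;
    ```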

  • Replicate Table DDL Changes to other schema..!!

    Hello All,
    I have two schemas SCH1 and SCH2
    SCH1 : TBL1
    SCH2 : A_TBL1
    Our requirement is that whenever we create a table in SCH1, e.g. TBL2, some code should automatically create a table A_TBL2 in SCH2. Both TBL2 and A_TBL2 should have the same structure. I can manage this with an "ON CREATE OR ALTER" trigger.
    But now, if in SCH1 we ALTER TABLE TBL2, the same changes have to be applied to A_TBL2 in SCH2. I have no idea how to handle this ALTER TABLE change.
    Please any help would be greatly appreciated..
    Here's the trigger i did for create table..
    create or replace
    TRIGGER DDL_APPLY_TO_SCH2
    AFTER ALTER OR CREATE ON SCH1.SCHEMA
    DECLARE
      VSTR VARCHAR2(255);
      VJOB NUMBER;
    BEGIN
      IF ORA_DICT_OBJ_TYPE = 'TABLE' THEN
        VSTR := 'EXECUTE IMMEDIATE ''CREATE TABLE SCH2.A_' || ORA_DICT_OBJ_NAME ||
                ' AS SELECT * FROM ' || ORA_DICT_OBJ_NAME || ''';';
        DBMS_OUTPUT.PUT_LINE(VSTR);
        DBMS_JOB.SUBMIT(VJOB, VSTR);
      END IF;
    END;
    /
    Thanks - HP

    This is not a good idea, but you can make the approach above work: the trigger also fires after an ALTER event, so first check whether the object (table) SCH2.A_<name> already exists; if it does, drop it, then create it again using
    VSTR := 'EXECUTE IMMEDIATE ''CREATE TABLE SCH2.A_'||ORA_DICT_OBJ_NAME||' AS SELECT * FROM '||ORA_DICT_OBJ_NAME||''';';
    That way, after the ALTER, you end up with the changed table.

  • DDL Related Changes

    We have an application published. We had to increase the size of a varchar field in one of the tables, so we also changed the "create table" statement for the snapshot. We forced full resyncs for all of our users. (Note: this is a win32 offline app that utilizes msync for syncing) The full resync did not make the change.
    In a test scenario, we manually deleted the local database and then resynced and then the table changes were made to the local client database just because of the simple fact that the tables were recreated.
    This is great and all, and we could get by this way to get the changes reflected locally... but there is an issue: some of the snapshot's tables in the MOBILEADMIN schema, for example CFM$WTGPI_10065, still have the varchar2 field at its original size. I'm sure this is the same for other snapshot tables as well. I imagine that if we were now to allow users to enter data longer than the original field length, the apply phase would fail or the sync would not succeed at all, while at the same time the Oracle Lite database on the mobile device would allow the insert to succeed.
    So, my question is: is there another way to reflect DDL changes in an application snapshot, for example through the Consolidator API? Or will the application have to be completely removed and then republished?
    Thanks.
    Lee

    Sorry, forgot to update this thread with our results.
    Gary, your information was helpful, but didn't fully resolve the issue.
    The Consolidator API offers an alterPublicationItem function to change a publication and its snapshot in real time. This seems to be the only way to trigger a local DDL change for the Mobile user at sync time, without doing a forced resync with the files deleted.
    Things we tried:
    1) Made snapshot changes and republished with wtgpack (did not work)
    2) After step 1, deleted the local DB on the laptop and did a full resync (this semi-worked, but did not update the internal mobileadmin tables, for example CFM$WTGPI_10065)
    3) Used alterPublicationItem through Consolidator API (Changed everything, internal tables and even local laptop tables)
    Here is the Consolidator API code we wrote to achieve this:
    import java.sql.SQLException;
    import java.sql.*;
    import oracle.lite.sync.Consolidator;
    import oracle.mobile.admin.ResourceManager;

    class alterPublication {
        public static void main(String[] args) throws Throwable {
            try {
                System.out.println("Connecting as Mobileadmin User");
                oracle.mobile.admin.ResourceManager.openConnection("mobileadmin",
                    "<PASSWORD>", "jdbc:oracle:thin:@<IP ADDRESS>:1521:<SID>");
                System.out.println("Connected");

                String snapshot_select =
                    "SELECT VISIT_ID, REASON, REASON_DETAIL FROM SLSPRSN_VISIT_REASONS";
                String pubName = "WTGPI_10065";
                Consolidator.AlterPublicationItem(pubName, snapshot_select);
                System.out.println("Finished");

                oracle.mobile.admin.ResourceManager.closeConnection();
                System.out.println("Close Connection");
                System.out.println("Done");
            } catch (Throwable e) {
                System.out.println("Something went wrong...");
                System.out.println(e.getMessage());
            }
        }
    }

  • How do I generate DDL for changes in model

    Hi
    I'm using SQL Developer 4 to maintain my logical and physical diagrams. I know how to generate the diagram and generate DDL. But how do I generate only the DDL changes?
    Regards,
    Néstor Boscán

    Basically you want to compare the model with the data dictionary, see the big arrow buttons up on the main toolbar.
    In depth answer here:
    http://www.thatjeffsmith.com/archive/2012/03/diffs-and-alter-scripts-via-oracle-sql-developer-data-modeler/

  • GoldenGate - Oracle to MSSQL - handling DDL replication abends on replicat

    Sorry for the cross-post. I clearly failed to actually read the "there's a GoldenGate forum" sticky...
    Hello -
    Very much a GoldenGate noob here, so please excuse me if I fumble on the terms / explanations - still learning.
    We've recently been lucky enough to have a GoldenGate pump put into our campus environment to support our data warehouse. We don't manage the pump or source systems - just the target.
    Pump: GoldenGate 11.1.1.1
    Source: Oracle 11g
    Target: MSSQL 2008R2
    ~5,000 tables of hugely varying sizes
    The extract is apparently configured to push DDL changes, which is clearly not going to work with a MSSQL 2008 R2 target. We're getting abend messages on the replicat and I'd like to know if we can bypass them on the replicat or need to ask that the extract process be modified to exclude DDL operations.
    The replicat error logs show exception:
    OGG-00453: DDL Replication is not supported for this database
    On the replicat I've tried including:
    DDL EXCLUDE ALL
    DDLERROR DEFAULT DISCARD (or DDLERROR DEFAULT IGNORE - neither let the process continue)
    The replicat just abends with the same OGG-00453 exception.
    My question: Can I gracefully handle these abends on the replicat? Or do I need to request the extract be updated with "DDL EXCLUDE ALL." Ideally, we can handle this on the replicat - I'm trying to be considerate of the main GoldenGate admin's time and also avoid any disruption of the extract.
    Any direction / info / ideas much appreciated.
    Thank you,
    Eric

    924681 wrote:
    My question: Can I gracefully handle these abends on the replicat? Or do I need to request the extract be updated with "DDL EXCLUDE ALL"?

    I find it strange that DDLERROR DEFAULT IGNORE does not work. Are you sure you placed it properly? Did you restart the replicat after making the change?
    Why don't you try specifying the error, like:
    DDLERROR <error> IGNORE

  • Applying Streams changes to a different schema

    We're trying to setup a Streams environment between two DBs 9.2.0.3.
    We have two different schemas, one in each DB, called TEST_HQ and TEST_CO; both contain the same objects, tables, and procedures. What we're interested in is applying DML/DDL changes to the destination DB even though the schema has a different name but the same structure.
    Right now the CAPTURE and PROPAGATE processes are working fine, while the APPLY process is unable to apply any change. We've also tried creating a new schema called TEST_HQ in the destination DB, just like the source schema, with synonyms to the real tables contained in TEST_CO, but this hits ORA-23416 'missing primary key'.
    Any help or hint will be greatly appreciated!
    Thanks in advance!
    Max

    Thanks for your reply!
    Actually there is a primary key, but Streams seems to ignore it. I've also tried the SET_KEY_COLUMNS route plus supplementary logging on the source DB, but this didn't help at all. I think this is happening because the tables are not in the TEST_HQ schema; there are only synonyms to the real tables contained in TEST_CO.
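    For reference, the SET_KEY_COLUMNS attempt mentioned above would look roughly like this (the table and column names are hypothetical):

    ```sql
    -- Declare substitute key columns for the apply process on the destination DB
    BEGIN
      DBMS_APPLY_ADM.SET_KEY_COLUMNS(
        object_name => 'TEST_CO.MY_TABLE',
        column_list => 'ID');
    END;
    /
    ```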
    Is there any other easy way to get Streams working between two different schemas?
    Thanks again for your help!!
    Max

  • PL Package to sync ddl between two schemas

    About a year or so ago I came across a website (I can't remember which) indicating that in 11g there is a package that will automatically update a target schema from a source schema to make it structurally the same (DDL). I've been unable to find such a package and want to know if anyone knows whether it exists. We are writing software that would push DDL changes made in a dev schema to test and then prod without the need to maintain scripts. We would want it to work similarly to the way JDeveloper handles database changes made to a model. We could write it from scratch, but didn't want to go that route if there was something from Oracle we could leverage.
    Thanks in advance for any insight.

    Never used it, but you are probably talking about DBMS_COMPARISON?
    The aforementioned document further links to Comparing and Converging Data.
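    A rough sketch of DBMS_COMPARISON usage, assuming a database link to the target (all names are hypothetical). Note that it compares and converges data, not DDL, so it may not cover the structural-sync requirement:

    ```sql
    BEGIN
      DBMS_COMPARISON.CREATE_COMPARISON(
        comparison_name    => 'CMP_TBL1',
        schema_name        => 'DEV',
        object_name        => 'TBL1',
        dblink_name        => 'PROD_LINK',
        remote_schema_name => 'PROD',
        remote_object_name => 'TBL1');
    END;
    /
    ```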
