EJB, Moving Target DB, Abstract Schema

I apologize in advance for seeming clueless. My explanation is this: there is no money. I have an inexperienced staff. I've been away from building architectures too long to be specific. I can't hire a contractor. I need some advice.
We are converting many Access applications to Java/J2EE/AnyRelationalDB. The way we have planned to approach this is to divide the DBs into various classes (say Personnel records, Vehicles, and so on). These DBs will be moving targets that will change as we discover Access applications that add or change features in whatever class of DB we're working with at the moment.
My goal is to eliminate changing each and every App every time some DB parameter changes (DBMS, a changed attribute, etc.). I think EJB/abstract schemas will let me get a generic view of the DB and insulate the App from the very real possibility of changing DB parameters.
I need some help verifying this or pointing me in a better direction.
Thanks for your help,
Bob

I think your best option is to implement CMP entity beans, with a facade of services (business logic) that accesses the beans, which map to tables in the DB. The main advantage of doing this is DB vendor independence and transparency, because you define static queries in a declarative way.
I don't quite understand what you mean by DB parameters, but if you are referring to changes to the database schema, like new tables, new fields, or changes to existing fields, you still need to align those changes with the attributes in your application.
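To make that concrete, here is a minimal sketch of the facade idea in EJB 2.x style. The Vehicle bean, its fields, and the lookup are invented for illustration, not taken from your schema:

import javax.ejb.EJBLocalHome;
import javax.ejb.EJBLocalObject;
import javax.ejb.EntityBean;
import javax.ejb.FinderException;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

// CMP entity bean: the persistent fields are abstract accessors. The
// container maps them to columns, so this class never names a table
// or a DB vendor.
public abstract class VehicleBean implements EntityBean {
    public abstract String getVin();
    public abstract void setVin(String vin);
    public abstract String getOwner();
    public abstract void setOwner(String owner);
}

// Local interfaces the container implements for the entity bean.
interface VehicleLocal extends EJBLocalObject {
    String getOwner();
}

interface VehicleLocalHome extends EJBLocalHome {
    VehicleLocal findByPrimaryKey(String vin) throws FinderException;
}

// Session facade: clients call business methods here and never touch
// the entity layer directly, so schema-driven changes stay behind it.
class FleetServiceBean implements SessionBean {
    private VehicleLocalHome vehicleHome; // obtained via JNDI in ejbCreate()

    public String findOwner(String vin) throws FinderException {
        return vehicleHome.findByPrimaryKey(vin).getOwner();
    }

    // Required container callbacks (no-ops in this sketch).
    public void ejbCreate() {}
    public void setSessionContext(SessionContext ctx) {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}

Note that the deployment descriptor, not this code, binds VehicleBean to a physical table and supplies the queries, which is where the vendor-specific mapping lives.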
Cheers

Similar Messages

  • Moving Target DB and Abstract Schema

    I apologize in advance for seeming clueless. My explanation is this: there is no money. I have an inexperienced staff. I've been away from building architectures too long to be specific. I can't hire a contractor. I need some advice.
    We are converting many Access applications to Java/J2EE/AnyRelationalDB. The way we have planned to approach this is to divide the DBs into various classes (say Personnel records, Vehicles, and so on). These DBs will be moving targets that will change as we discover Access applications that add or change features in whatever class of DB we're working with at the moment.
    My goal is to eliminate changing each and every App every time some DB parameter changes (DBMS, a changed attribute, etc.). I think EJB/abstract schemas will let me get a generic view of the DB and insulate the App from the very real possibility of changing DB parameters.
    I need some help verifying this or pointing me in a better direction.
    Thanks for your help,
    Bob

    My first advice is that the description of your team doesn't bode well for the success of the project you describe.
    Let me frame it in another context to illuminate how dubious this sounds:
    I want to build a house with curved glass walls and high vaulted ceilings, perched on a steep hillside. There is no money. I have inexperienced staff. I've been away from building houses too long to be specific. I can't hire a contractor.
    As for the goal itself: if you use an EJB layer that supports XDoclet or other portable CMP, then yes, it will do this. However, it's not simple, and if your table structure changes significantly, your EJBs will not keep working automatically. The fact of the matter is that EJB is pretty complex and requires a lot of esoteric knowledge. Many EJB projects have failed or produced terrible results. If you don't have any very capable developer/designers, or have no developers with solid EJB experience, I would under no circumstances attempt this. EJB is often overkill anyway: the real point of EJB is to help with distributed computing, not to abstract away the DB schema.
    A simple approach that many people overlook is to use stored procedures. Stored procedures create a layer of abstraction between your code and the DB, such that the DB can change without the code changing.
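    For example, here is a minimal JDBC sketch; the procedure name, its parameters, and the data behind it are invented for illustration:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Types;

    public class PersonnelDao {
        // Calls a hypothetical get_employee_dept procedure. The application
        // never names the underlying tables, so the DBA can restructure them
        // freely as long as the procedure keeps its contract.
        public String lookupDepartment(Connection con, int employeeId) throws SQLException {
            try (CallableStatement cs = con.prepareCall("{call get_employee_dept(?, ?)}")) {
                cs.setInt(1, employeeId);
                cs.registerOutParameter(2, Types.VARCHAR);
                cs.execute();
                return cs.getString(2);
            }
        }
    }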

  • What is "abstract schema" in CMP 2.0

    I saw this term in the configuration dialog of the J2EE SDK 1.3. What is it?

    Abstract schema refers to the persistent fields and relationship fields of an entity bean which uses container-managed persistence (CMP). You specify an identifier to refer to a bean's abstract schema. Then you can compose EJB QL statements for custom finder & select methods. So, if you have a student entity bean, let's say its abstract schema identifier is studentEJB. To find all students with CMP field 'name' equal to parameter 1, your EJB QL query would be:
    select object (s) from studentEJB s
    where s.name = ?1
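    For completeness, a sketch of where the pieces live (the interface and method names are made up to match the example): the finder is declared on the bean's home interface, and the EJB QL is bound to it in ejb-jar.xml, where the abstract schema name is also declared.

    import java.util.Collection;
    import javax.ejb.EJBLocalHome;
    import javax.ejb.FinderException;

    // The container generates the implementation. The EJB QL above is
    // attached to this method in ejb-jar.xml, roughly:
    //   <abstract-schema-name>studentEJB</abstract-schema-name>
    //   <query>
    //     <query-method>
    //       <method-name>findByName</method-name>
    //       <method-params><method-param>java.lang.String</method-param></method-params>
    //     </query-method>
    //     <ejb-ql>select object (s) from studentEJB s where s.name = ?1</ejb-ql>
    //   </query>
    public interface StudentLocalHome extends EJBLocalHome {
        Collection findByName(String name) throws FinderException;
    }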
    More information can be found in Chapter 11 of EJB 2.0 spec (really, it's not bad). Good luck.

  • What is the impact on an Exchange server when moving FSMO role and schema master into another DC?

    What is the impact on an Exchange server when moving the FSMO roles and schema master to another DC? What do we have to do on Exchange after performing such a task?
    I had 1 DC (Windows Server 2008 R2) and 1 Exchange 2010 SP3 server. I installed a new DC (Windows Server 2008 R2), then moved all the FSMO roles, including the schema master role, to the new DC. I checked to be sure that the new DC is a GC as well.
    I shut down the old DC, and my Exchange server stopped working properly, especially the Exchange Management Shell. It started working again after I brought the old DC back up.
    I am wondering why Exchange did not recognize the new DC, even after moving all the roles onto it.
    I look forward to hearing from you.
    Thanks a lot

    If you only have 1 DC, you might need to cycle the AD Topology service after shutting the old one down.
    Also, take a look in the Windows logs; there should be an event where Exchange goes to discover domain controllers. Make sure both are listed there. You can probably force that by cycling AD Topology (this will take all services down, so be careful when you do it).

  • Time Machine - Moving Target

    I am running an early 2008 MacBook Pro 15" and successfully moved to Mountain Lion on the day of release. I run two Time Machines, one at work and one at home. The one at work has filled up, and I opted to have Time Machine make space by deleting old backups. That was fine until fairly recently. Time Machine now has moving targets and my backups fail. By moving targets I mean, for example, that after preparation, Time Machine indicates, say, 3.5 GB are to be backed up. It seems to go OK for a while, but as it nears the end, the end size keeps increasing, so it never finishes; eventually, around 65 GB of 72 GB, it fails. So I did a manual start of the backup and watched it. Next time around it wants to back up 3.55 GB. It fails when the target and the amount backed up have both crept up to 10 GB. This is not happening on the home Time Machine disk, or at least not yet. Is my only option to erase the entire disk and restart Time Machine on it? Any other solutions? I still have one Time Machine, but I like to keep two (I don't like single points of failure).

    Thanks, but that is not the case. I usually do a couple things in Windows then I shut down the virtual machine and I usually turn off Time Machine while I am in the VM. I do this because in the past, when using the VM and a Time Machine backup is triggered, it makes the VM rather unusable. Still this doesn't explain why Time Machine works in one location and not the other, because I do perform the same tasks at home and at work.
    I've had my internal HDD die before and a full Time Machine backup saved me when Apple replaced my HDD. So after waiting overnight for the restore, everything including the VM virtual disk was restored in one step. I have since upgraded my HDD to an SSD so Time Machine saved me a bunch of grief when I replaced a perfectly working HDD with the SSD. I like Time Machine so much that I found a couple of similar products for my wife's Windows machine and for my mother's Windows machine.
    I essentially solved the problem the low-tech way: I reformatted the disk and let Time Machine do the initial backup. Since the "old" backups were under Lion, and I am on Mountain Lion, I don't really need them. I did a full verify via Disk Utility before and after reformatting; the disk was fine, with zero errors.

  • Moving data between two schemas

    I need to move data between two schemas. I have created packaged code to accomplish this. The problem is the execution time: when running the insert statements from the source schema to insert data into the target schema, it takes considerably longer to complete the statement than if I copied the tables from the source schema into the target schema and executed the same statement there. Any insight as to why this might be?
    Also all data resides on the same physical disk, running version 10g on a W2K server.
    Thanks in advance
    Here is a sample of one of the insert statements:
    INSERT INTO target_table (tt_id, tt_disp, tt_date, tt_emp_1, tt_emp_2, tt_emp_3)
    SELECT src_tab.src_id,
           src_tab.scr_disp,
           src_tab.scr_date,
           src_tab.scr_emp_1,
           src_tab.scr_emp_2,
           src_tab.scr_emp_3
      FROM (SELECT row_number() OVER (
                       ORDER BY SUBSTR(fn_cil_sort_format(SUBSTR(src_cil, 1, 8)), 1, 4),
                                SUBSTR(src_cil, 4, 8)) AS src_id,
                   scr_disp,
                   fn_date_format(date_time) AS scr_date,
                   v_convert AS scr_emp_1,
                   v_convert AS scr_emp_2,
                   v_convert AS scr_emp_3
              FROM source_table
             ORDER BY SUBSTR(fn_sort_format(SUBSTR(src_cil, 1, 8)), 1, 4),
                      SUBSTR(src_cil, 4, 8)) src_tab
     WHERE scr_disp IS NOT NULL;

    In addition to the above post, you should create the table initially with NOLOGGING. CREATE TABLE ... AS SELECT can bypass logging, which should increase performance considerably, since no redo log writes have to take place.
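    A sketch of that idea from the Java side (connection details and the schema qualifier are placeholders; note that after a NOLOGGING load you should take a fresh backup, since unlogged changes can't be recovered from the archived redo):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CopyToTargetSchema {
        public static void main(String[] args) throws SQLException {
            // Connect as the target schema owner (placeholder credentials).
            try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "target_user", "secret");
                 Statement st = con.createStatement()) {
                // CTAS with NOLOGGING uses a direct-path load and skips most
                // redo generation, so log writes don't slow the copy down.
                st.execute("CREATE TABLE target_table NOLOGGING AS "
                         + "SELECT * FROM source_schema.source_table "
                         + "WHERE scr_disp IS NOT NULL");
            }
        }
    }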
    Lee

  • Moving objects from one schema to another

    Hi,
    Again as part of my learning exercise, I was trying to move objects from a schema in a 10gR2 database to an 11gR2 database.
    I am able to use exp and imp commands to achieve this objective -
    On source database -
    $> exp userid=scott/tiger owner=(schemaname) file=data.dmp statistics=none
    Copied the dump file on target server and then
    $> imp userid=scott/tiger file=data.dmp fromuser=uname touser=scott
    And this works fine.
    What I am not able to understand is how I can achieve this using Data Pump instead.
    Your help is much appreciated.
    Thanks.

    Thanks.
    Worked perfectly.
    This is what I did (in case someone is in my position)-
    On source database-
    expdp scott/tiger schemas=<source schema> directory=DATA_PUMP_DIR dumpfile=data.dmp job_name=some_job
    Copied dump file to target server.
    On target database-
    $> impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=data.dmp remap_schema=<source schema>:<target schema> job_name=imp_kob

  • EJB 3 to Multiple Identical-Schema Databases

    I'm about to develop a JEE app, hopefully using EJB 3, in which a user will connect to one of 250+ databases. Yes, it's a bad design, but it is what I inherited. All the databases have the same schema, PL/SQL, etc. The database the user needs to log into is passed in with the login information.
    Does anyone have a good suggestion on how this might be done?
    Thanks in advance.
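    One workable pattern, sketched below under assumptions (a single persistence unit named "sharedUnit", Oracle service names matching the passed-in database names, and the standard JPA 2.0 JDBC URL override property): since all the databases share one schema, you can reuse a single set of entities and just select a per-database EntityManagerFactory at login.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    // Caches one EntityManagerFactory per database; all of them share the
    // same persistence unit, since the schemas are identical.
    public class DatabaseSelector {
        private static final Map<String, EntityManagerFactory> FACTORIES =
                new ConcurrentHashMap<>();

        // dbName arrives with the user's login information.
        public static EntityManager managerFor(String dbName) {
            EntityManagerFactory emf = FACTORIES.computeIfAbsent(dbName, name ->
                    Persistence.createEntityManagerFactory("sharedUnit",
                            Map.of("javax.persistence.jdbc.url",
                                   "jdbc:oracle:thin:@//dbhost:1521/" + name)));
            return emf.createEntityManager();
        }
    }

    In a container you would more likely define one DataSource per database in the app server and look the right one up by JNDI name at login, but the cache-keyed-by-database shape stays the same.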

    Hi Tony,
    Again, apologies for the delay... something to do with Transatlantic Timezones! :-)
    You asked: "So let me confirm: you have 4 developers working on 4 DIFFERENT copies of the SAME application?"
    Answer: nope. I have (say) four distinct schemas, whose contents are identical tables and PL/SQL, but the records in those tables are essentially distinct from the records in any of the other schemas' tables.
    So: four identical photocopies of the same A4 tax form (daft analogy), but the numbers written into the form's boxes are all different.
    With due respect, I do appreciate that this might not be the best way for four developers to work on the same project, but each developer is using the same application with different data. And the easy way to make sure that one developer doesn't screw it up royally for the others is to give them separate schemas.
    Yes, when the application gets a version increment, we have four schemas to upgrade. Small bananas to pay for the peace of mind that separate schemas give for three to six months at a time.
    Mostly I'm thinking of using Apex in a "read-only" context: showing reports on the data in the schemas' tables. The best way to go about this is my query: some sort of dynamic view?
    So please don't fret about the design - it is what it is, for good reasons. How I can use one-Apex-per-database to satisfy one-report-per-schema is my quest.
    Regards
    Mungo

  • Moving targets, metrics, etc to a new repository

    I have OMS 10.2.0.3 configured to use a repository in a 10.2.0.2 database. I need to move the information in that repository to another database, as there are some problems with it and it cannot be upgraded to 10.2.0.4. An exp/imp will not work.
    I installed a new OMS 10.2.0.4 on a 11.1.0.6 database. I now need to get my old information to the new one:
    ** targets, users, roles, groups, metrics, templates, beacons, jobs, credentials, notification rules, etc
    I plan on pointing agents to the new OMS for each host, which would get the targets. I would then need to update the metrics and create the other objects listed above.
    1. Does anyone know of a better method? - upgrade and clone are both out.
    2. Are there APIs or SQLs to get the details of the current items, so I can get a list of what to manually enter in the new repository? I put together some SQLs to get some of the info, but I don't have most of them.
    For example, I can get a list of:
    all jobs scheduled : MGMT_JOB_SCHEDULE, MGMT_JOB, MGMT_JOB_TARGET, MGMT_TARGETS
    groups : MGMT$GROUP_MEMBERS
    I'd like the way to get users, roles, metrics, templates, beacons, credentials, notification rules, etc.
    3. Are there APIs or SQLs to create these in the new repository?
    Thanks,
    Gary

    Unfortunately, I see that my script gathers most information for jobs, but not the actual job steps themselves. Those are in MGMT_JOB_LARGE_PARAMS, but I haven't seen a way to join to that yet.
    I'd still like some scripts.
    Here is the incomplete one I have for jobs:
    SELECT j.job_owner, j.job_name, t.target_name,
           DECODE(s.frequency_code,
                  1, 'Once', 2, 'Interval', 3, 'Daily', 4, 'Day of Week',
                  5, 'Day of Month', 6, 'Day of Year', s.frequency_code) "FREQUENCY",
           s.start_time, s.interval, -- months,
           s.days, j.job_description, j.job_status
      FROM MGMT_JOB_SCHEDULE s, MGMT_JOB j, MGMT_JOB_TARGET jt, MGMT_TARGETS t
     WHERE s.schedule_id = j.schedule_id AND frequency_code != 0
       AND j.expired = 0 AND j.is_library = 0 -- and job_name like '%BACKUP%'
       AND jt.job_id = j.job_id AND jt.target_guid = t.target_guid
       AND jt.execution_id = HEXTORAW('0000000000000000')
     ORDER BY s.execution_hours;

  • Moving target db to new RMAN catalog.

    Using oracle 10g rel2 on RedHat 4.
    Scenario: I have RmanCatalogA (with many target DBs) and a new RmanCatalogB (with different target DBs).
    What's the best way to move a target database X from RmanCatalogA to the new RmanCatalogB? I did not find any matching/ideal scenario in any Oracle documentation. I want my old and new backups to be valid and restorable.
    I'm not comfortable un-registering my DB X from RmanCatalogA and registering it with RmanCatalogB; I'm not 100% sure I will be able to restore from backups taken while it was connected to the old catalog.
    Any suggestion or directions?

    Your database controlfile will contain the backup information as well. Just register the database with the new catalog, then run some queries against it and you should see everything intact; then unregister from the old catalog.
    Thanks
    Paul

  • Best Mic for Moving Target

    I am doing remote podcast interviews. I have two small desktop stands and the recorder, but I need a couple of durable mics. I was considering 2 SM57s or 58s, but thought I would ask everyone here for a little advice first.
    Price range up to $200 each.
    Needs to endure road conditions.
    Needs to be able to record voice only.
    Users are not pros, so they will tend to move around (sit back, etc.).
    Noisy background sometimes.
    Let me know what would be the best mic for these parameters.
    Thanks!

    That was very helpful. Mostly because it just confirms the use of the 58s. Just wanted to make sure I did not miss another that might be a better choice. Thanks WH!
    You're welcome. Glad to add my 2.7 cents worth (inflation).
    Many big podcasters have gone with headset or lavaliere mics.
    A good headset mic can work well for an interview (it will stay the same distance away from the speaker).
    As for a lavaliere, I know they are used on TV a lot.
    However, for amateur interviewees, it may not be a good choice. My advice is to avoid the temptation to use lavaliere microphones, at least for interview recording. Clipped to a lapel, or hanging around the neck, the lav mic is in a less-than-ideal position for good voice pick-up. Additionally, if the subject is moving around, clothes and cables will likely add unacceptable noise.
    WH

  • Moving tables from one schema to another

    Hi
    I have 9 tables in Schema_1 that have already been populated with data.
    I would like to remove these tables from Schema_1 and place them into Schema_2
    These tables also have indexes attached to them.
    Thanks

    I don't think there's a "move" command as such, though someone may prove me wrong.
    You could "grant select" on the relevant tables to user Schema_2, then log on as the second user and perform "create table ... as select * from Schema_1.....". Then, create the indexes manaually. After all this, drop the old tables.

  • Why is iTunes Match such a moving target on matching things?

    I have been playing with iTunes Match for the last several months. I recently deleted everything out and re-matched/uploaded everything. I have two issues with the service.
    1. The service relies too much on the whole track time to match it. I have been using the trick of shortening tracks to what they are in iTunes to get tracks to match; however, this does not always work. I am picky, and I like to have everything be the same, for a consistent listening experience.
    2. This leads to problem number 2: why can it not match tracks that it previously matched? Since putting everything back up into Match, I currently have 138 tracks that are listed as Matched AAC files but are uploaded to the service.
    These problems wouldn't be so bad, but it's frustrating when you have 10 tracks out of 36 match with no reason or explanation. It feels like I'm having to game the system to get my stuff to match up.
    If anyone has a suggestion why things are constantly missing, or how to get a higher match rate, I'd greatly like to hear it.

    Michael Allbritton wrote:
    JiminMissouri wrote:
    I did some experimentation this morning with iTunes Match album art that wasn't appearing for me and have had some good results.  To begin, it's clear iTunes Match has some size limitations as to what it will accept for album art.  If you have 600x600, 45k images, it won't display them.  Re-size to 300x330 25k and if all else is correct, they should show up.  My guess is this is by design.  The core audience for iTunes match are people accessing it while on the go.  Less bandwidth, faster downloads, more space for music, etc.
    To follow up on your answer a bit, Jim, I use album art that is 600x600 @ 72 DPI in much of my custom applied album art and it does show up on my iPad via iTunes Match. I've found that the best resolutions for album art are either 300x300 or 600x600 at 72 DPI.
    600 x 600 works? Good to know. Admittedly, I didn't try to nail down whether it was DPI or file size that mattered most; I just took things down to 300x300 and around 25k, and that did it. Perhaps it's file size that's key here. I'll go as small as I can both ways though, provided the image looks OK to me via Apple TV on my 52" Samsung. 300x300 seems OK to these old eyes, and I'm hoping smaller will improve load times for the artwork.
    I can guess why Apple decided to put some limitations on art, rather like they have with the 200Mb track limit - which drops a lot if the song gets converted to AAC on the fly in iTunes before it's uploaded.  Could they have implemented a similar re-size/resample for artwork? Sure, and perhaps they will one day, but for now I'm just glad to have a pretty good handle on how to fix it myself.

  • ZISD sector a moving target with ZEN 7?

    I participated in a thread on ENGL's forums concerning their ActiveX control for reading the ZISD data from within Windows using VBScript. I asked them if it was OK to use v1.0 of their ActiveX control with ZEN 6.5 and ZEN 7.0 (it only listed ZEN 3.x-4.0).
    Heath said they plan on releasing a ZEN 6.5/7.0 compatible version in
    January 2006, which is good.
    I asked him why they needed to release a new version, and he said something along the lines of: with ZEN 7, the ZISD isn't always located at sector 6 like it was in older versions of ZENworks Imaging. Here is what he said:
    "The current release of ZisdCtrl supports ZISD @ sector 6, with ZENworks
    7 ZISD may not necessarily be written at sector 6."
    Does anyone have any info on this kind of thing? I am interested in knowing what some of these changes are and why they were made.
    I cannot find anything on Novell's site.

    The info from Heath is correct; the ZISD isn't always at sector 6.
    The reason why is that GRUB likes to use the first few sectors on the disk, and as such, imaging had some problems with Linux machines using GRUB.
    Ron
    "Jeremy Mlazovsky" <[email protected]> wrote in message
    news:t%gsf.1846$[email protected]..
    >I participated in a thread on ENGL's forums concernign their ActiveX
    >control for reading the ZISD data from within Windows using VBScript. I
    >asked them if it was OK to use the v1.0 of their ActiveX control with ZEN
    >6.5 and ZEN 7.0 (it only listed ZEN 3.x-4.0).
    >
    > Heath said they plan on releasing a ZEN 6.5/7.0 compatible version in
    > January 2006, which is good.
    >
    > I asked him why they needed to release a new version, and he said
    > something along the lines of with ZEN 7, the ZISD isn't always located at
    > Sector 6 like it was in older versions of ZEN Imaging. Here is what he
    > said:
    >
    > "The current release of ZisdCtrl supports ZISD @ sector 6, with ZENworks 7
    > ZISD may not necessarily be written at sector 6."
    >
    > Does anyone have any info on this kind of thing? I am interested in
    > knowin gwhat some of these changes are and why they were made.
    >
    > I cannot find anything on Novell's site.

  • Performance slows down when moving from stage to test schema within same instance with same database table and objects

    We have created a stage schema and tested the application, which works fine. When we move it to another schema for further testing (this schema was created using the same scripts that were used to create the objects in the staging schema), the performance of the application (developed in .NET) slows down drastically.
    Some of the stored procedures we have checked at the database/SQL Developer level give almost the same performance, but at the application level there is a lot of difference.
    Can you please help?
    We are using Oracle 11g Database.

    Are you using the Database Cloud Service?  You cannot create schemas in the Database Cloud Service, which makes me think you are not.  This forum is only for the Database Cloud Service.
    - Rick Greenwald
