KODO Disgrace - Newsgroup Quality

We are planning to use KODO as the underlying persistence mechanism for an already implemented system. We updated our persistence wrapper to use the JPA-compliant KODO, since we wanted to be JPA compliant. Everything was fine until we started to use the wrapper in the real system, where we ran into problems during the migration.
This is a disgrace. There has been no reply to our posts for more than a month. Maybe this is not the place to write such questions, maybe there is a bug, maybe we could not explain the problem clearly, etc. However, our posts have not received a single negative or positive reply. We are almost on the edge of giving up on the decision to use KODO. If the problem is money: we planned to purchase it at the end of a successful migration.
Congratulations dear KODO team, and thank you.
Siyamed

We are reading these forums.
Unfortunately, our response times have not been good because we had not assigned clear responsibility for responding. But we do care very much about the concerns raised here, and I am as disturbed as you to find that our Support is not hitting the mark. Rest assured, I’m taking action on this.
Marc Logemann hits the nail on the head when he identifies these problems as stemming mainly from the transition. Kodo developers used to monitor these user groups, but we’ve assigned them to new product development. In their place, we are training a new Kodo Support Team, but it does take time to put infrastructure in place, which is why our responses… even this one… may seem sluggish in this transition phase. Once people are in place with the expertise needed, I think you’ll find our responses to be both swift and knowledgeable.
However, that isn’t an excuse. The truth is, it was never our intention to ignore developers, even during this transition period. We do still have support structures in place for Kodo. Our support team is not quite up to the level of the original Kodo engineers, it is true, but we expected them to meet the need during this transition.
I think the major disconnect here is that Kodo users have been used to receiving support in a different way than we anticipated, and in a different way than we are used to providing it. However, we should have done a better job of familiarizing long-time Kodo users like you with the processes BEA has in place for customers to raise and escalate concerns and get issue resolution. We should have been more prepared for the culture you already have in place.
Luckily, that is easily remedied. We are working to get everyone educated about our Support channels. We are also in the process of building a strong infrastructure to provide Kodo users with BEA’s normal level of comprehensive support. And we expect to be able to regularly review forums such as this one.
As for our support of Kodo as a stand-alone product: it is absolutely not in question. BEA’s resource commitment to Kodo is very strong. We do not intend to change the things that make Kodo most successful. I’ve copied and pasted a few bullet points below to indicate how we are working internally to bring Kodo support up to date.
Here are some of the commitments BEA is making to Kodo:
a.     More BEA Support engineers assigned to Kodo (and currently coming up to speed) than the entire original SolarMetric engineering team had prior to the acquisition.
b.     Kodo Support engineers training in all regions worldwide. This enables us to provide business hour support including phone support in EMEA and APAC as well as in the Americas.
c.     24x7 Production Support, another first for Kodo customers. Previous SolarMetric support was email-only and on a 12x5 basis (10 AM – 10 PM EST).
d.     We have been focusing the expertise of the Kodo Engineers on product development: This year, the BEA Kodo team delivered a major release for EJB3 (Kodo 4.0) and released that technology to the open source community via Open JPA. We are also nearing the release for JDO2 (Kodo 4.1).
e.     Adding staff and additional training in preparation for future releases of Kodo.
I hope this answers some of the questions raised here. I encourage you to contact me if you have concerns about the support you are receiving from BEA.
Thanks for offering us feedback and for your support of Kodo.
Terry Clearkin
WW SVP, BEA Support

Similar Messages

  • T420, or for that matter all new ThinkPad products, losing quality / flimsy

    Well, once a proud owner of many ThinkPads, starting with the first 386 version during my pre-primary years, on to the T43 series, the T60p series, the T400 and T500 models, and now an owner of a T420, I must say that over the last 12 months the quality, in comparison to the prior models, has been a disgrace: the fall from grace of a once well-known quality brand, the ThinkPad.
    Let me tell you: within the last 12 months of use, the ejection mechanism on the CD-ROM fell out of place and was non-repairable; I was surprised at this. One of my battery lock-in clips has come off, and this is just through normal use, to and from work. The keys on my keyboard are getting, how should I put it, unresponsive. It has all come down to a cheap piece of crap for a ThinkPad T series model.
    On another note, when I had to upgrade my memory modules, I was dumbfounded by the number of screws in comparison to previous models of the once-great T series line. I can also say, having an electronics engineering background and having had a look at the circuitry, that the quality control is almost non-existent. I noticed visible rework on the motherboard, misaligned IC chips, and solder balls between the legs. I am just surprised at the level of **bleep**iness to which a once true quality product has degraded.
    IN ESSENCE, I have gone back to my good old T60p and bought an SSD for it. Lenovo ThinkPads were once THE choice for me. The T420 will most probably be the last one I get. I am looking towards the Dell Precision series.
    I hope Lenovo turns back and allows quality to once again be the norm for the ThinkPad T series.
    On another note, it is not only me complaining about this matter. I work for a well-known telecommunications company in Australia, and they have made the shift to Dell Precisions due to low-quality issues with the T series... Unfortunate.

    The conventional laptop market isn't growing, so manufacturers are squeezing as much out of the components as possible. Shareholders won't stand for non-growth, so you're stuck with the trend.
    W520: i7-2720QM, Q2000M at 1080/688/1376, 21GB RAM, 500GB + 750GB HDD, FHD screen
    X61T: L7500, 3GB RAM, 500GB HDD, XGA screen, Ultrabase
    Y3P: 5Y70, 8GB RAM, 256GB SSD, QHD+ screen

  • My experience migrating Kodo 3.4 to 4.1

    Hello Stefan,
    I struggled with Kodo 4.0 and gave it up. Kodo 4.1 seems to be a much better
    release. I migrated my app in a day. First I managed to run it against the 3.4
    metadata with some property file changes (the migration docs are not very good
    and miss a few things, but maybe if you use Kodo's automated migration tools they
    will do for you what I was doing manually). If you use lots of custom
    field mappings (especially non-trivial mappings), allocate much more time for
    the conversion - the whole thing has changed. I have not had a chance to convert
    my mappings and had to bandage them with externalizers and other things for
    now. One thing you lose in kodo3.4 mode is the ability to query by interface,
    since now it must be declared explicitly.
    A couple of tips here:
    - kodo.Log: kodo(DefaultLevel=INFO, SQL=TRACE...) - "kodo" is no longer a valid
    logger name
    - kodo.jdbc.DBDictionary: oracle(BatchLimit=30) - BatchLimit=30 is no longer a
    valid option; use
    kodo.jdbc.SQLFactory: (BatchLimit=50) instead
    - kodo.PersistentClasses= ... no longer works; use
    kodo.MetaDataFactory: kodo3(Types=....) in kodo3 mode or
    kodo.MetaDataFactory: (Types=...) in jdo2 mode
    - Any SQL with a DATE column is no longer batched, leading to a 2-3x
    performance drop. The decision was made because of bugs in the Oracle 9 drivers in
    batching mode. If you have the latest drivers (and database), in my experience
    you will not have any problems. To re-enable batching you can register your own
    instance of AdvancedSQL (now a factored-out part of DBDictionary):
    kodo.jdbc.SQLFactory: (BatchLimit=50,
    AdvancedSQL=com.peacetech.jdo.kodo.kodo4.patch.OracleAdvancedSQL)
    where OracleAdvancedSQL could look like:
    public class OracleAdvancedSQL extends kodo.jdbc.sql.OracleAdvancedSQL {
      @Override public boolean canBatch(Column col) {
        switch (col.getType()) {
          case Types.DATE:
            return true;
          default:
            return super.canBatch(col);
        }
      }
    }
    - I have not tested read performance much since I was concentrating on
    writes. But write performance, even with batching enabled, does not seem to be
    up to the 3.4 level: I observed a consistent 30-40% decrease in performance
    while persisting a large, complex graph of fairly small objects. I ran the 3.4 and
    4.1 versions side by side against a dedicated Oracle 10 server and noticed a
    performance decrease of 30-40%.
    The SQL generated by both versions was very close if not identical (I only did a
    spot check), but in the case of INSERTs you would not expect it to be any different anyway.
    I tried profiling the 4.1 version and found some significant hot spots, but could
    not decipher them in any reasonable time because of the huge depth of the stacks
    and the lack of source code. I might try again if I have time, because
    performance is critical for us.
    - I have not tried any new/advanced features yet, including new mappings,
    detachment, the data cache, or the quality of eager fetch, so I cannot give you any
    good feedback on that. At least I can say that this release is worth trying -
    after migration my application worked as expected, except for the lower
    performance.
    I also have to say I do not know how well automated migration of Kodo 3.4
    metadata to JDO2 metadata works (if it exists), because I use my model-driven
    code generator: I just developed a JDO2 plugin for it and regenerated my
    metadata from the model (I did not have to regenerate my Java classes, of
    course).
    Alex
    Then I created native JDO2 mappings and everything
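    For reference, the configuration-key renames from the tips above, side by side (a sketch assembled from the post itself; com.example.Foo is a placeholder class name, not from the thread):

    ```properties
    # Kodo 3.4 style (no longer valid in 4.x):
    # kodo.jdbc.DBDictionary: oracle(BatchLimit=30)
    # kodo.PersistentClasses: com.example.Foo

    # Kodo 4.x equivalents:
    kodo.jdbc.SQLFactory: (BatchLimit=50)
    kodo.MetaDataFactory: kodo3(Types=com.example.Foo)
    # or, in jdo2 mode:
    # kodo.MetaDataFactory: (Types=com.example.Foo)
    ```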

    Denis,
    Could you email it to me please shurik at peacetech dot com
    Thanks
    Alex
    "Denis Sukhoroslov" <[email protected]> wrote in message
    news:[email protected]...
    Alex,
    The issue was in version 3.4.1. BEA has provided a patch, no new version.
    Denis.
    "Alex Roytman" <[email protected]> wrote in message
    news:[email protected]...
    Denis,
    In which version did you observe it and which version fixed it?
    Thank you
    Alex
    "Denis Sukhoroslov" <[email protected]> wrote in message
    news:[email protected]...
    I don't know, I haven't tried 4.1 yet. It is possible that this issue
    doesn't exist in kodo 4.x at all.
    "Christiaan" <[email protected]> wrote in message
    news:[email protected]...
    Nice! Is it also solved for 4.1?
    regards,
    Christiaan
    "Denis Sukhoroslov" <[email protected]> wrote in message
    news:[email protected]...
    Finally, BEA has solved the issue I mentioned. Reading cached PCs
    which have embedded objects has become much faster (about 10 times in my
    tests).
    Thank you very much to all who were involved in this job.
    Denis.
    "Denis Sukhoroslov" <[email protected]> wrote in message
    news:[email protected]...
    Hi Alex,
    I know about default-fetch-group; of course I marked these embedded
    fields properly. You're right, it is not a cache miss but an
    unnecessary fetch from the DB. It's strange that nobody has found this
    before. I managed to create a standalone test case and send it to BEA
    support. They agree that it is a bug, but still can't fix the issue.
    The test is quite small, so if anyone is interested I can post it here.
    Denis.
    "Alex Roytman" <[email protected]> wrote in message
    news:[email protected]...
    Hi Denis,
    That's very strange. All custom fields such as enums etc. are
    essentially mapped onto regular JDO-mandated types. I use them all the
    time and have not observed this behavior, but I might have missed it,
    of course. I have a suspicion that what you are seeing is not cache
    misses but rather fetches outside of the default fetch group. Keep in
    mind that Kodo does not fetch any custom field as part of the default
    fetch group unless you explicitly specify it in your package.jdo.
    So, try marking all your custom-mapped fields with
    default-fetch-group="true" and I suspect all your extra database
    selects will disappear.
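    A minimal package.jdo sketch of that advice (the package, class, and field names here are hypothetical placeholders, not from this thread):

    ```xml
    <?xml version="1.0"?>
    <jdo>
      <package name="com.example.model">
        <class name="Order">
          <!-- hypothetical custom-mapped field: default-fetch-group="true"
               asks Kodo to load it with the default fetch group instead of
               issuing a separate SELECT on first access -->
          <field name="status" default-fetch-group="true"/>
        </class>
      </package>
    </jdo>
    ```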
    Read speed, indeed, is always the critical part. I just have not had a
    chance to play with 4.1 reads enough to say whether it is faster or
    slower. There are more ways to optimize reads (various flavors of
    eager fetches, custom optimized mapping of collections including
    embedding ...) but very few optimizations for updates.
    Alex
    "Denis Sukhoroslov" <[email protected]> wrote in message
    news:[email protected]...
    Hi Alex.
    My question is off-topic, but it looks like you may have an
    answer. BEA support has done nothing for the last few months.
    We still use kodo 3.4.1; the DB is Sybase 12.5.x. In our app we're very
    concerned about performance as well. But we have many more reads
    than writes, so we're trying to cache as much as possible. The Kodo
    distributed cache works quite well. At least, it gives better
    performance than GemFire and Tangosol on the same use cases. But
    we found a bug in its caching mechanism: when you have a persistent
    class and this class has an embedded attribute of some
    non-primitive type (like an enum, or just a simple complex type with
    one or two attributes in it), Kodo bypasses the cache and performs a
    select against the DB each time. Have you seen this? Is it possible to solve
    via custom mapping, what do you think?
    Thanks. Denis.
    "Alex Roytman" <[email protected]> wrote in message
    news:[email protected]...
    Hello Stefan,
    I struggled with Kodo 4.0 and gave it up. Kodo 4.1 seems to be a
    much better release. I migrated my app in a day. First I managed
    to run it against 3.4 metadata with some property file changes
    (migration docs are not very good and miss few things but may be
    if you use Kodo automated migration tools it will do for you what
    I was doing manually) . If you use lots of custom filed mappings
    (especially non-trivial mappings) allocate much more time for
    conversion - the whole thing has changed. I have not had chance to
    convert my mapping and had to bandate it with externalizers and
    other things for now. One thing you lose in kodo3.4 mode is
    ability to query by interface since now it must be explicetly.
    Couple of tips here
    - kodo.Log: kodo(DefaultLevel=INFO, SQL=TRACE...) kodo is no
    longer valid logger name
    - kodo.jdbc.DBDictionary: oracle(BatchLimit=30) BatchLimit=30 is
    no longer a valid option use
    kodo.jdbc.SQLFactory: (BatchLimit=50) instead
    - kodo.PersistentClasses= ... is no longer works use
    kodo.MetaDataFactory: kodo3(Types=....) in kodo3 mode or
    kodo.MetaDataFactory: (Types=...) in jdo2 mode
    - Any SQL with DATE column is no longer batched leading to 2-3
    times performance drop. The decision swa made based on bugs in
    oracle 9 drivers in batching mode. If you have latest drivers (and
    database) from my experience you will not have any problems. So to
    reenable it you can register your own instance of AdvancedSQL (now
    a factored out part of DatabaseDictionary):
    kodo.jdbc.SQLFactory: (BatchLimit=50,
    AdvancedSQL=com.peacetech.jdo.kodo.kodo4.patch.OracleAdvancedSQL)
    where OracleAdvancedSQL could look like:
    public class OracleAdvancedSQL extends
    kodo.jdbc.sql.OracleAdvancedSQL {
    @Override public boolean canBatch(Column col) {
    switch (col.getType()) {
    case Types.DATE:
    return true;
    default:
    return super.canBatch(col);
    - I have not tested read performance much since I was
    concentrating on writes. But write performance even with batch
    enabled does not seems to be not up to 3.4 level I observed a
    consistent 30-40% decrease in performance while persisting large
    graph complex of fairly small objects. I ran 3.4 and 4.1 versions
    side by side against a dedicated Oracle 10 server and noticed
    performance decrease of 30-40%
    SQL generated by both versions was very close if not identical (I
    only did spotcheck) but incase of INSERT you would not expect it
    any different anyway :-)
    I tried profiling 4.1 version and found some significant hot spots
    but could not dechipher them in any reasonable time because of
    huge depth of stacks and lack of source code. I might try it
    again if I have time because performance is critical for us.
    - I have not tried any new/advanced features yet. including new
    mappings, detachment, data cache, quality of eager fetch so I can
    not give you any good feedback on that. At least I can say that
    this release worth trying - after migration my application worked
    as expected except for lower performance
    I also have to say I do not know how well automated migration of
    kodo 3.4 metadata to jdo2 metadata works (if it exists) because I
    use my model driven code generator and I just developed JDO2
    plugin for it and regenerated my metadata from the model (I did
    not have to gegenerate my java classes of course)
    Alex
    Then I created native JDO2 mappings and everything

  • !!  Open-source extension to Kodo released  !!

    Hello fellow Kodo users,
    You guys that are bold enough to want the cutting edge in Kodo software
    are my target audience here... (repost from main news group)
    I've been working with Kodo since it was created, and my employer has been
    generous enough to allow me to open-source my work. My goal was to make
    Kodo / JDO easy to use for the other developers on my team, without
    sacrificing any of the power of the JDO spec. The result is
    http://jstomp.sourceforge.net/. Here you will find a robust bytecode
    enhancer which makes working with Kodo a breeze. You think your code is
    clean now? Wait until you see what you can do with Stomp.
    We've been using this enhancer in production internally for over a year,
    with great results. We're using Stomp in an appserver (JBoss), connecting
    to multiple legacy databases, and everything is running smoothly. I
    convinced my employer to open this code up to the Kodo community, in the
    hope that one or two quality developers on this list will adopt this
    technology and contribute to its success.
    So... if you are new to Kodo, forget this. If you've been using Kodo for
    a while, check this out. It rocks. I'm always open to questions /
    comments.
    Eric Lindauer
    [email protected]

    Sorry Romu, but this is against the CoC and [Terms of use|http://www.sun.com/termsofuse.jsp#g2_1] here.

  • "No suitable driver" error using Kodo 3.1.0, MySQL Connector/J & eclipse 3.0M8

    Hi,
    I'm getting the following error when trying to use the mapping tool from
    within eclipse 3.0M8:
    <error>-An error occurred running MappingTool
    kodo.util.FatalDataStoreException: No suitable driver
    NestedThrowables:
    java.sql.SQLException: No suitable driver
    <info>-Done.
    I followed the instructions on using the eclipse plugin, including copying
    all jars from kodo's lib folder to the plugin folder, copying the MySQL
    Connector/J jar to the kodo plugin folder, adding all of those jars to the
    project classpath, and even added an entry to the plugin.xml file to
    include the MySQL Connector/J jar. If I remove the project reference to
    the MySQL Connector/J jar, the error changes to:
    <error>-An error occurred running MappingTool
    kodo.util.FatalDataStoreException: com.mysql.jdbc.Driver
    NestedThrowables:
    java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
    <info>-Done.
    This would imply that adding a <library> entry for the MySQL Connector/J
    jar in my plugin.xml does not have any effect; if I remove it, I see the
    same error as above. If I add the MySQL jar back to the project
    classpath, the error changes back to 'No suitable driver' as above.
    The behavior is the same whether I use the Kodo preferences
    (Window\Preferences\Kodo Preferences) individually without a
    kodo.properties file, or when I clear all individual properties and
    indicate a kodo.properties file.
    Help?
    Thanks,
    Matthew
    Here's my kodo.properties file:
    # Kodo JDO Properties configuration
    kodo.LicenseKey: xxx
    javax.jdo.PersistenceManagerFactoryClass:
    kodo.jdbc.runtime.JDBCPersistenceManagerFactory
    javax.jdo.option.ConnectionDriverName: com.mysql.jdbc.Driver
    javax.jdo.option.ConnectionUserName: root
    javax.jdo.option.ConnectionPassword:
    javax.jdo.option.ConnectionURL: jdbc:mysql://localhost/kodo
    javax.jdo.option.Optimistic: true
    javax.jdo.option.RetainValues: true
    javax.jdo.option.NontransactionalRead: true
    kodo.Log: DefaultLevel=WARN, Runtime=INFO, Tool=INFO
    Here's my eclipse-3.0M8/plugins/kodo.eclipse_2.1.0/plugin.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <plugin id="kodo"
    name="%name"
    version="1.0.1"
    provider-name="%provider-name"
    class="kodo.jdbc.integration.eclipse.KodoPlugin">
    <runtime>
    <!--
    Put your jdbc driver in this directory and enter the filename
    here (and configure in Preferences the changes you make) -->
    <!--<library name="jdbc-hsql-1_7_0.jar"/>-->
    <library name="mysql-connector-java-3.0.11-stable-bin.jar"/>
    <!-- ########### do not modify below ######### -->
    <!-- ########### do not modify below ######### -->
    <!-- ########### do not modify below ######### -->
    <!-- ########### do not modify below ######### -->
    <!-- ########### do not modify below ######### -->
    <library name="kodo-jdo.jar"/>
    <library name="jakarta-commons-collections-2.1.jar"/>
    <library name="jakarta-commons-lang-1.0.1.jar"/>
    <library name="jakarta-commons-pool-1.0.1.jar"/>
    <library name="jakarta-regexp-1.1.jar"/>
    <library name="jca1.0.jar"/>
    <library name="jdbc2_0-stdext.jar"/>
    <library name="jdo-1.0.1.jar"/>
    <library name="jta-spec1_0_1.jar"/>
    <library name="xalan.jar"/>
    <library name="xercesImpl.jar"/>
    <library name="xml-apis.jar"/>
    <library name="jfreechart-0.9.16.jar"/>
    <library name="jcommon-0.9.1.jar"/>
    <library name="mx4j-admb.jar"/>
    <library name="mx4j-jmx.jar"/>
    <library name="mx4j-tools.jar"/>
    <library name="jline.jar"/>
    <library name="sqlline.jar"/>
    </runtime>
    <requires>
    <import plugin="org.eclipse.ui"/>
    <import plugin="org.eclipse.core.resources"/>
    <import plugin="org.eclipse.jdt.core"/>
    <import plugin="org.eclipse.jdt.launching"/>
    </requires>
    <extension point="org.eclipse.ui.actionSets">
    <actionSet id="kodo.jdbc.integration.eclipse.actionSet"
    label="%action-set-name"
    visible="true">
    <menu id="kodo.menu"
    label="%group-label">
    <separator name="baseGroup"/>
    </menu>
    <action id="kodo.removeBuilder"
    label="%remove-builder-label"
    class="kodo.jdbc.integration.eclipse.RemoveBuilderAction"
    tooltip="%remove-builder-tooltip"
    menubarPath="kodo.menu/baseGroup"
    enablesFor="1">
    </action>
    <action id="kodo.addbuilder"
    label="%add-builder-label"
    class="kodo.jdbc.integration.eclipse.AddBuilderAction"
    tooltip="%add-builder-tooltip"
    menubarPath="kodo.menu/baseGroup"
    enablesFor="1">
    </action>
    <action id="kodo.mapping.build"
    label="%mapping-build-label"
    tooltip="%mapping-build-tooltip"
    class="kodo.jdbc.integration.eclipse.MappingToolAction$BuildSchema"
    icon="icons/BuildSchemaMappingTool.gif"
    menubarPath="kodo.menu/baseGroup"
    toolbarPath="Normal/Kodo"
    enablesFor="+">
    <selection class="org.eclipse.core.resources.IFile"
    name="*.jdo">
    </selection>
    </action>
    <action id="kodo.mapping.drop"
    label="%mapping-drop-label"
    tooltip="%mapping-drop-tooltip"
    class="kodo.jdbc.integration.eclipse.MappingToolAction$Drop"
    icon="icons/DropMappingTool.gif"
    menubarPath="kodo.menu/baseGroup"
    toolbarPath="Normal/Kodo"
    enablesFor="+">
    <selection class="org.eclipse.core.resources.IFile"
    name="*.jdo">
    </selection>
    </action>
    <action id="kodo.mapping.refresh"
    label="%mapping-refresh-label"
    tooltip="%mapping-refresh-tooltip"
    class="kodo.jdbc.integration.eclipse.MappingToolAction$Refresh"
    icon="icons/RefreshMappingTool.gif"
    menubarPath="kodo.menu/baseGroup"
    toolbarPath="Normal/Kodo"
    enablesFor="+">
    <selection class="org.eclipse.core.resources.IFile"
    name="*.jdo">
    </selection>
    </action>
    <action id="kodo.enhance"
    label="%enhance-label"
    icon="icons/EnhancerAction.gif"
    class="kodo.jdbc.integration.eclipse.EnhancerAction"
    tooltip="%enhance-tooltip"
    menubarPath="kodo.menu/baseGroup"
    toolbarPath="Normal/Kodo"
    enablesFor="+">
    <selection class="org.eclipse.core.resources.IFile"
    name="*.jdo">
    </selection>
    </action>
    </actionSet>
    </extension>
    <!-- lock our actions into the base perspective -->
    <extension point="org.eclipse.ui.perspectiveExtensions">
    <perspectiveExtension
    targetID="org.eclipse.ui.resourcePerspective">
    <actionSet
    id="kodo.jdbc.integration.eclipse.actionSet">
    </actionSet>
    </perspectiveExtension>
    </extension>
    <!-- put our extensions in -->
    <extension point="org.eclipse.ui.preferencePages">
    <page name="%preference-name"
    class="kodo.jdbc.integration.eclipse.KodoPreferencePage"
    id="kodo.jdbc.integration.eclipse.preferences.KodoPreferencePage">
    </page>
    </extension>
    <!-- lock in our eclipse-generated xml editor -->
    <extension point="org.eclipse.ui.editors">
    <editor name="%mappingeditor-name" extensions="mapping"
    icon="icons/mapping.gif"
    contributorClass="org.eclipse.ui.texteditor.BasicTextEditorActionContributor"
    class="kodo.jdbc.integration.eclipse.editor.XMLEditor"
    id="kodo.jdbc.integration.eclipse.editor.XMLEditorMapping">
    </editor>
    <editor name="%editor-name" extensions="jdo,schema"
    icon="icons/metadata.gif"
    contributorClass="org.eclipse.ui.texteditor.BasicTextEditorActionContributor"
    class="kodo.jdbc.integration.eclipse.editor.XMLEditor"
    id="kodo.jdbc.integration.eclipse.editor.XMLEditor">
    </editor>
    </extension>
    <!-- lock in our "view" -->
    <extension point="org.eclipse.ui.views">
    <view id="kodo.jdbc.integration.eclipse.KodoView"
    name="%view-name"
    category="org.eclipse.jdt.ui.java"
    icon="icons/kodosmall.gif"
    class="kodo.jdbc.integration.eclipse.KodoView">
    </view>
    </extension>
    <!-- lock in our builder -->
    <extension point="org.eclipse.core.resources.builders"
    id="kodo.jdbc.integration.eclipse.EnhancerBuilder"
    name="%builder-name">
    <builder>
    <run
    class="kodo.jdbc.integration.eclipse.EnhancerBuilder">
    </run>
    </builder>
    </extension>
    <!-- put our view onto the bottom bar -->
    <extension point="org.eclipse.ui.perspectiveExtensions">
    <perspectiveExtension
    targetID="org.eclipse.debug.ui.DebugPerspective">
    <view id="kodo.jdbc.integration.eclipse.KodoView"
    relative="org.eclipse.debug.ui.ExpressionView"
    relationship="stack"/>
    <viewShortcut id="org.eclipse.jdt.debug.ui.DisplayView"/>
    </perspectiveExtension>
    </extension>
    </plugin>

    I am not using Eclipse but am also experiencing problems similar to those
    described below. The fact is, I cannot validate a configuration file which
    specifies either a traditional MySQL driver or a MySQL DataSource.
    I am using what I believe to be the official stable version of MySQL
    Connector/J, which is 3.0.11 and has been for several months. Are you
    saying that 3.0.11 (or even 3.0.8) is not supported?
    3.0.12 was recently released as the latest stable version, so 3.0.11 should
    at least be supported by now.
    -Neil
    Stephen Kim wrote:
    The milestones are not fully supported versions as they are of beta
    quality. I would suggest using a proper release instead.
    Wolfgang Kundrus wrote:
    Hi,
    I have exactly the same behaviour here with Eclipse 3.0M8 and MySQL. It
    is the same with 3.0.8 and 3.0.11. What is the solution?
    Best regards
    Wolfgang Kundrus
    Marc Prud'hommeaux wrote:
    Matthew-
    Everything looks correct to me. If you try an older version of the MySQL
    driver (such as 3.0.8, which is what we test with), do you see anything
    different?
    If you try to run a stand-alone Kodo program using the MySQL driver, do
    you see the same exception? If so, can you post the complete stack
    trace?
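    Marc's stand-alone suggestion can be reduced to a plain JDBC check, independent of Kodo and Eclipse (a generic diagnostic sketch; the driver class, URL, and credentials are taken from the kodo.properties above):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class DriverCheck {
        public static void main(String[] args) {
            // 1) Is the driver class on the classpath at all?
            try {
                Class.forName("com.mysql.jdbc.Driver");
                System.out.println("driver class found");
            } catch (ClassNotFoundException e) {
                System.out.println("driver class missing: " + e.getMessage());
                return;
            }
            // 2) Can DriverManager match the URL to a registered driver?
            //    "No suitable driver" at this step, with the class found above,
            //    usually means the driver was loaded by a different classloader
            //    than the one DriverManager consults (the Eclipse plugin case).
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/kodo", "root", "")) {
                System.out.println("connected: "
                        + con.getMetaData().getDatabaseProductName());
            } catch (SQLException e) {
                System.out.println("connect failed: " + e.getMessage());
            }
        }
    }
    ```

    If step 1 fails you have a classpath problem; if step 1 succeeds but step 2 reports "No suitable driver", it is a classloader-visibility problem rather than a missing jar.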
    tooltip="%mapping-build-tooltip"
    class="kodo.jdbc.integration.eclipse.MappingToolAction$BuildSchema"
    icon="icons/BuildSchemaMappingTool.gif"
    menubarPath="kodo.menu/baseGroup"
    toolbarPath="Normal/Kodo"
    enablesFor="+">
    <selection class="org.eclipse.core.resources.IFile"
    name="*.jdo">
    </selection>
    </action>
    <action id="kodo.mapping.drop"
    label="%mapping-drop-label"
    tooltip="%mapping-drop-tooltip"
    class="kodo.jdbc.integration.eclipse.MappingToolAction$Drop"
    icon="icons/DropMappingTool.gif"
    menubarPath="kodo.menu/baseGroup"
    toolbarPath="Normal/Kodo"
    enablesFor="+">
    <selection class="org.eclipse.core.resources.IFile"
    name="*.jdo">
    </selection>
    </action>
    <action id="kodo.mapping.refresh"
    label="%mapping-refresh-label"
    tooltip="%mapping-refresh-tooltip"
    class="kodo.jdbc.integration.eclipse.MappingToolAction$Refresh"
    icon="icons/RefreshMappingTool.gif"
    menubarPath="kodo.menu/baseGroup"
    toolbarPath="Normal/Kodo"
    enablesFor="+">
    <selection class="org.eclipse.core.resources.IFile"
    name="*.jdo">
    </selection>
    </action>
    <action id="kodo.enhance"
    label="%enhance-label"
    icon="icons/EnhancerAction.gif"
    class="kodo.jdbc.integration.eclipse.EnhancerAction"
    tooltip="%enhance-tooltip"
    menubarPath="kodo.menu/baseGroup"
    toolbarPath="Normal/Kodo"
    enablesFor="+">
    <selection class="org.eclipse.core.resources.IFile"
    name="*.jdo">
    </selection>
    </action>
    </actionSet>
    </extension>
    <!-- lock our actions into the base perspective -->
    <extension point="org.eclipse.ui.perspectiveExtensions">
    <perspectiveExtension
    targetID="org.eclipse.ui.resourcePerspective">
    <actionSet
    id="kodo.jdbc.integration.eclipse.actionSet">
    </actionSet>
    </perspectiveExtension>
    </extension>
    <!-- put our extensions in -->
    <extension point="org.eclipse.ui.preferencePages">
    <page name="%preference-name"
    class="kodo.jdbc.integration.eclipse.KodoPreferencePage"
    id="kodo.jdbc.integration.eclipse.preferences.KodoPreferencePage">
    </page>
    </extension>
    <!-- lock in our eclipse-generated xml editor -->
    <extension point="org.eclipse.ui.editors">
    <editor name="%mappingeditor-name" extensions="mapping"
    icon="icons/mapping.gif"
    contributorClass="org.eclipse.ui.texteditor.BasicTextEditorActionContributor"
    class="kodo.jdbc.integration.eclipse.editor.XMLEditor"
    id="kodo.jdbc.integration.eclipse.editor.XMLEditorMapping">
    </editor>
    <editor name="%editor-name" extensions="jdo,schema"
    icon="icons/metadata.gif"
    contributorClass="org.eclipse.ui.texteditor.BasicTextEditorActionContributor"
    class="kodo.jdbc.integration.eclipse.editor.XMLEditor"
    id="kodo.jdbc.integration.eclipse.editor.XMLEditor">
    </editor>
    </extension>
    <!-- lock in our "view" -->
    <extension point="org.eclipse.ui.views">
    <view id="kodo.jdbc.integration.eclipse.KodoView"
    name="%view-name"
    category="org.eclipse.jdt.ui.java"
    icon="icons/kodosmall.gif"
    class="kodo.jdbc.integration.eclipse.KodoView">
    </view>
    </extension>
    <!-- lock in our builder -->
    <extension point="org.eclipse.core.resources.builders"
    id="kodo.jdbc.integration.eclipse.EnhancerBuilder"
    name="%builder-name">
    <builder>
    <run
    class="kodo.jdbc.integration.eclipse.EnhancerBuilder">
    </run>
    </builder>
    </extension>
    <!-- put our view onto the bottom bar -->
    <extension point="org.eclipse.ui.perspectiveExtensions">
    <perspectiveExtension
    targetID="org.eclipse.debug.ui.DebugPerspective">
    <view id="kodo.jdbc.integration.eclipse.KodoView"
    relative="org.eclipse.debug.ui.ExpressionView"
    relationship="stack"/>
    <viewShortcut id="org.eclipse.jdt.debug.ui.DisplayView"/>
    </perspectiveExtension>
    </extension>
    </plugin>
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • Kudos to kodo JDO

    I first used Kodo JDO 2.3.2 last year while working for a client. I had
    tried (unsuccessfully) entity beans and Apache's OJB before trying JDO.
    Unfortunately the client put the project on hold and I couldn't finish the
    project. Now I once again find myself working on a prototype for a customer
    that is perfect for JDO. I'm now using 2.4.1 and I'm amazed at the
    improvements. First off the Eclipse integration is wonderful. The second
    thing is the JCA/JTA integration with JBoss, works great! I'm also
    impressed that the price hasn't changed. LiDO, another JDO vendor, wanted
    $4,000/CPU for their Enterprise version. I told them how ridiculous they
    were. Who do they think they are, WebLogic? Finally the support here is
    top notch.
    Anyway, this is just a note to say thanks and keep up the great work,
    especially with the Eclipse plugin!
    Michael

    Thanks Patrick, the patch fixes the serialization problems I was having...
    "Patrick Linskey" <[email protected]> wrote:
    Hello all,
    A new patch to Kodo JDO 2.2.1 is now available. It includes:
    - a fix that should make serialization (and, as a result,
    deserialization) of proxied second class objects such as HashMaps
    etc. work correctly.
    - Abe's jdoNewInstance() application identity fix (see Abe's post
    from last week).
    - a fix for a faulty invocation of Class.forName() that caused queries to fail
    in certain web application environments.
    - minor documentation and error reporting fixes.
    Links:
    http://www.techtrader.com/products/kodo-beta/tt-jdo-2.2.1-patched.jar
    http://www.techtrader.com/products/kodo-beta/tt-jdoee-2.2.1-patched.jar
    As always, this interim release is of beta quality, as is appropriate
    for the beta news group. It contains fixes that may cause more problems
    than they solve, so treat it with care.
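    (For context, the serialization fix above concerns detached objects whose collection fields Kodo wraps in proxy types; the behavior being restored is the plain Java round trip sketched below, shown here with an ordinary HashMap rather than Kodo's proxy classes.)

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

public class RoundTrip {

    /** Serializes then deserializes an object, as happens when a persistent
     *  instance with a map field crosses a wire or an HTTP session. */
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T roundTrip(T obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (T) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("answer", 42);
        System.out.println(roundTrip(map)); // an equal but distinct copy
    }
}
```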
    Enjoy.
    -Patrick
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • EJB3/JBoss vs. JDO2/Kodo

    Hi,
    does anybody here by any chance have a (convincing ;) feature comparison between
    EJB3/JBoss and JDO2/Kodo (Enterprise)? I'm not talking about support here, as that
    can be purchased for JBoss as well.
    My intuition says that Kodo is better technology and e.g. has better documentation,
    but unfortunately that's too subjective to convince anybody else (which I'd really
    like to).
    Thanks for any information,
    Jörg.

    Thank you a lot for this even longer answer! Whenever someone asks me again why we
    should go for Kodo, I'll give him a printout of this mail, just saying to him "read
    this" ;)
    Neelan Choksi schrieb:
    "It is clear that Kodo 4 complies with EJB3 (although it is not production-ready
    currently)."
    No implementation that supports EJB3 is production-ready currently. That
    is simply because the EJB3 specification just moved into public draft 2
    weeks ago. There are still a lot of changes that are taking place in the
    EJB3 spec (simply compare the Early Draft to the Public Draft and you'll
    see how much things are changing). I am not criticizing the spec - I am
    just pointing out that the spec is at a specific point in its life cycle
    and there are a lot of really smart people including our CTO Patrick
    Linskey trying to make sure 1) it is a good specification and 2) trying to
    drive the specification forward to completion.
    That said there are a lot of O/R Mapping products that are production
    ready. I am going to limit my comments in this thread to focus on what
    Kodo, the product, does well and not go into the differences in the
    specifications.
    I would also encourage you to take a look at both specifications and
    determine which specification is right for you. Although fundamentally
    similar, I believe certain developers will feel more comfortable with JDO
    and certain developers will feel more comfortable with EJB3.
    One distinguishing characteristic about Kodo is that you can use EJB 3 and
    JDO 2 interoperably. This is a very big difference from some of the other
    products that are on the market esp. since each specification has things
    that the other specification is missing. Kodo gives you the best of both
    worlds and allows you to choose how much of each world you want to use.
    A pattern that we see emerging is that a lot of new customers are excited
    to use our JDO standard support today and get the productivity gains that
    Kodo provides today. While they wait to see if EJB3 Persistence meets
    their needs and while they wait for their companies to adopt the
    technology that EJB3 requires (JEE 5 and an app server that supports JEE
    5), they will get the benefits of a solid O/R mapping tool that is based
    on a standard. Over time, the interoperability that Kodo provides will
    allow them to migrate piecemeal to EJB3 Persistence as it matures, thereby
    managing their risk of adopting new technologies. Some of our customers
    will migrate quickly and some have no plans of migrating ever. In any
    case, Kodo provides a pathway for them.
    "But I can get the JBoss implementation of EJB3 for free, why should I pay
    more for Kodo when it doesn't give me more functionality or performance?"
    Abe has highlighted a number of technical features that distinguish Kodo.
    I would encourage you to evaluate those features in a serious manner.
    Oftentimes we see other products that claim to have a given feature that
    is in Kodo but when you actually use the feature, there are big
    differences in quality and functionality. For example, I know that a
    number of O/R mapping products claim to have a JMX management console but
    I also know that the depth of functionality that is specific to O/R
    mapping and ease of use of Kodo's Management Console was a big enough
    differentiator that clients have asked to purchase the Management Console
    standalone (FYI, to date we have not chosen to sell it standalone).
    Regarding performance, this is one of the hardest things to compare.
    We've been asked often if we have benchmarking numbers. The problem with
    benchmarks is that a benchmark can be written to say anything you really
    want. I would encourage you to benchmark for your specific application.
    When properly tuned, I have not seen Kodo lose a benchmark since we
    released Kodo 3 over a year ago. And we've seen some wins that have
    really excited us e.g., Kodo + relational database has beaten existing
    applications with object databases in performance / scalability
    comparisons.
    I also would like to highlight that a big differentiator between Kodo and
    other products is the business model each employs. SolarMetric's focus is
    selling product and occasionally selling services. As such, we do things
    like take the most common of our experiences with performance tuning and
    the most common support cases on performance / scalability and generalize
    them into a chapter on optimization techniques in our documentation. We
    created the Management Console and Profiler to help customers resolve
    their own performance / scalability issues. Other organizations that have
    different models, e.g., solely services, are not motivated to provide this
    level of best practices documentation and / or tools because that cuts
    into their ability to provide services work.
    "I'm often in the same situation as Jörg, always defending the decision to use
    Kodo (and will keep doing so) against statements like: if we switch to EJB,
    why stay with Kodo if we can get everything with JBoss/Hibernate for 'free'?"
    Ok, even the JBoss guys admit that they are a business (there are some
    recent interviews with Marc Fleury where he distinguishes between open
    source hobbyists and professional open source where the goal is to
    generate revenue). The reality is that how JBoss makes money is different
    than how SolarMetric makes money, but nonetheless JBoss has to make money
    to stay in business and to keep their investors happy just as SolarMetric
    does.
    What we've found interesting over the past few months is that many of our
    enterprise customers who need support services and indemnification
    protection have found that Kodo breaks even with open source solutions
    when analyzed over a 1-2 year span and is dramatically cheaper over a 5
    year span (simply comparing the costs of license and support between the
    two). When factoring in other things e.g., quality of support, attitude
    of support personnel, cost of training, quality of training, ease of use,
    cost / time of contributing to the open source project, cost of digging in
    the code of the open source project, Kodo has a lower Total Cost of
    Ownership (TCO).
    One final point to highlight with Kodo is that our model is not based on
    certain customers of ours subsidizing all of the rest of our customers.
    Such a model has a lot of business risk associated with it because the
    customers that are willing to pay for services are customers that are
    highly coveted. It is often dangerous to wake sleeping giants and put
    your business at risk (see
    http://www.forbes.com/home/intelligentinfrastructure/2005/06/15/jboss-ibm-linux_cz_dl_0615jboss.html).
    I guess the bottom line is that JBoss/Hibernate or other open source
    alternatives aren't free whether you look at out of pockets costs, TCO, or
    risk.
    "Anyway, I'm convinced of Kodo's robustness - I've never had fewer problems when using
    a third-party product. If you describe a feature in the manual, it usually works.
    Also, the good pluggable architecture was worth its money ... it helped every time
    another implementation surely would have let us down.
    Unfortunately, those are all things that you only realize when you use it, and
    they are pretty difficult to communicate to the decision makers above me ;-)"
    Thank you for the compliments. I believe that the competition we've faced
    supporting standards has made us quite a bit stronger than products that
    have been historically proprietary. The challenge with proprietary
    products (e.g., TopLink and Hibernate) was that once you used them, you
    have to re-write your entire data access infrastructure to switch away
    from them. Having to support standards (JDO historically and now EJB 3)
    for the past 4 years has meant that we've faced severe competition and the
    only way for us to succeed was to build superior quality products that
    have more functionality and better scalability and performance than
    anything else out there. If we didn't, our customers could simply pick up
    another of as many as 25 JDO implementations and try it out to see if
    there was something better.
    That said, the fact that our differentiators are a bit more subtle is one
    of the challenges that we face as a business. A lot of the things that
    differentiate Kodo e.g., Profiler, Management Console, Large Results Sets
    Handling, Custom Fetch Groups, Lock Groups, Query Editor, etc. are things
    that typically aren't evaluated very closely in typical time-constrained
    evaluations, but are absolutely critical when deploying in production
    environments and when using Kodo with Agile Programming Methodologies.
    The pluggable architecture that you reference is also very important
    because many of our customers have constraints that prevent them from
    fitting into a specific box. Again, when a prospect does an eval, they
    typically evaluate standard mappings and base functionality, but the
    reality is that the real world often brings up issues: no matter how
    good the implementation is, there will be something that isn't supported.
    In our opinion, a pluggable architecture provides a cleaner way of
    allowing customers to customize what they need rather than digging into
    someone else's source code.
    The things that we've heard from customers who have switched from other
    solutions are:
    - Kodo produces extremely clean SQL relative to other tools and
    consequently has better performance and scaling characteristics
    - Kodo's documentation and error messaging are extensive and make
    debugging quite easy
    - Kodo's javadocs are extensive and provide a useful framework to
    customize a wide variety of things.
    - When tuning an application, Kodo provides more facilities and
    tools for doing so than other O/R mapping tools out there.
    - Kodo's tooling is a large differentiator; in particular, the Kodo Query
    Editor, the Profiler and the Management Console are often cited as
    distinguishing features.
    Also, one additional benefit with SolarMetric is that we are independent.
    Our agenda is very clear - we are trying to create value for you such that
    we get paid. There are no hidden agendas e.g., using Kodo as a loss
    leader to sell you an object database, an application server, an entire
    stack of products, training, services, etc. We simply are trying to build
    a high quality product that meets our customers' needs that works well
    with all the other technology products out there.
    I hope between Abe's post and this post, we have helped you with arguments
    for your management.
    I'd encourage you to evaluate yourself and give us feedback.
    Also, do not hesitate to contact me directly with any specific questions
    or if you want me to review any documentation that you put together for
    your management.
    Thanks for your questions and interest. Stefan, thank you for being a
    loyal customer.
    Neelan Choksi
    SolarMetric
    [email protected]
    202-595-2064 (main)
    512-542-9707 (direct)
    Vote for Kodo JDO as the Best DB Tool or Driver (Category 2), the Best
    Java Data Access Tool (Category 7), and the Best Java Persistence
    Architecture (Category 11) in the JDJ Readers Choice Awards at
    http://jdj.sys-con.com/general/readerschoice.htm.

  • Master data is not getting displayed in the Report in Quality

    Hi Experts,
    I have ZESP_Num as master for my transaction data.
    I have loaded the data in the Development and designed and checked my report. It is working fine.
    But after moving it to Quality, my report is not getting the master data.
    When I try to right-click and choose Manage on ZESP_Num,
    I get a message like this:
    The data from the InfoProvider ZESP_NUM involved could not be checked
    Message no. DBMAN242
    Diagnosis
    A participating InfoProvider is virtual and agreement between results and selections cannot be guaranteed.
    Can anyone please help me.
    Thanks

    Hi,
    Please check SAP Note 1364083 – "Message text will appear unexpectedly in surface".
    Hope this helps.
    Regards,
    RAO

  • Can I return my new laptop to Lenovo because of its quality problem?

    IBM is one of the best brands I know of, especially for the high quality of both its products and its service.
    But now something has happened to me that I couldn't have believed if it weren't my own case. I found something wrong with the laptop (T61 Model 7658CTO) on the same day I got it, June 3rd, 2008. Then I made a lot of calls to the 800 number; at last I was told that I should send it back first if I wanted to get another new one. Three days later I got the box sent by Lenovo, and I returned the laptop as the paperwork instructed.
    Unfortunately, the problems with the laptop still existed when I turned it on after it was repaired. What's wrong with my laptop? It's a new one; I think IBM should provide high quality products.
    I don't know how to describe the weird sound; it sounds like two or more microphones open close to each other. It is very harsh.
    Because of this product quality and customer service, I want to get a new laptop from Lenovo - or can I return the laptop?
    I would appreciate any advice you can give me.
    Message Edited by hjwu1979 on 06-27-2008 07:18 AM

    Like Obama says... yes you can
    I did it last week because I bought an iPhone 4 as a wedding gift but then I changed my mind and wanted an iPad, so the bride could use it together with the groom (while the phone is personal).
    I returned the iPhone to the Apple store and paid the difference to buy the 32GB iPad, no problems.
    I hadn't opened the case, but the guy at the Apple store told me I could have returned the phone even if it had been opened. It must be undamaged, though.

  • iPhone 5S low call voice quality.

    iPhone 5S low quality call voice. When I make a phone call, people on the other side say my voice is very quiet and cuts out. The voice is good when I am using the earphones on the same calls.

    Hello tigertiger123abd,
    The following article can help sort out issues with your iPhone's built-in microphone.
    iPhone: Microphone issues
    http://support.apple.com/kb/TS5183
    Cheers,
    Allen

  • Questions on default print quality and print order

    I recently upgraded to CS 5.5, a clean install on a new Mac running 10.7. I've found two issues regarding printing from both Acrobat and InDesign:
    In both programs, if I go directly to print (command-P, then hit OK without changing settings), documents will print at my printer's default setting, which is for a high-quality photo on glossy paper. (I haven't figured out yet how to change that default.) I want it to print at my faster standard-quality setting, which is generally also the "last-used" setting. I can of course go into printer settings to fix this -- each time I do that, my standard setting is already selected in that window -- but I have to make that move every time I print.
    Previous versions of both programs (CS3) automatically went into the last-used or standard setting. How can I get that back?
    Also, print order hasn't been working as it did in CS3. Documents print first-page-first in CS 5.5, unless I check "Reverse order"; in CS3, docs printed last-page-first automatically, without checking any additional box, making documents easier to collate. Also, on many documents, "Odd pages only" prints the even ones, and vice-versa.
    How can I get the automatic reverse order and correct odd/even back in Indesign 5.5 and Acrobat X Pro?

    Hi Davis,
    Well, if you are able to log on the BPS application (thus, a Web Interface if I'm not mistaken) then it means that some user is logged on the ABAP-side. It is either a "personal" user or a technical (common) user, that is used for anonymous access.
    You will certainly be able to track down this user with the SM04 transaction.
    You should be able to add a button to the BPS page. One idea would be to call a URL with an ICF service behind it that can process the parameters contained in this URL.
    It will be this very ICF service that will call the BAPI.
    Try these links
    http://help.sap.com/saphelp_nw04s/helpdata/en/55/33a83e370cc414e10000000a114084/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ea8db790-0201-0010-af98-de15b6c1ee1a
    Hope this helps.
    Best regards,
    Guillaume

  • How many times can you "Export - QuickTime movie" the same ProRes 422 file until it starts to lose quality?

    Let's say you are done with your project & you want an uncompressed copy of it as your master. If you keep importing it back into Final Cut Pro & keep exporting (QuickTime movie), will it ever start to lose quality?

    I've never seen a number given, but would think it is very high.
    Apple describes ProRes here:
    About Apple ProRes
    And in that document says:
       "while remaining visually lossless through many generations of decoding and reencoding."
    I don't think I've ever seen degradation caused by recompression or multiple compressions, but as a workflow, if you output a master file, then re-import it to add graphics and export that, you would only be 1 generation down from the master file. If you want to make multiple versions, you can always go back to and re-import the master file, make the changes, then export, so all your exports stay only 1 generation down from the master file.
    MtD

  • Let us solve this Verizon Service Quality debate for good

       Motorola Droid here.
         I don't frequent this forum too often, I probably glance at it once a week or so, but it doesn't take a regular visitor to take notice that this forum is always peppered with 3G complaints, some more or less valid than others.  However, after reading through a couple different threads tonight I'm afraid I'm going to have to hop on the bandwagon and question my 3G speeds.
         First off, I saw somebody say that 90-100 dBm isn't very good.  I am always stuck at around 98 dBm in my home though, and when I'm out and about I find it never gets below the high 80s.  Is this normal?  Also, in my Settings>About Phone>Status menu, it constantly says I am on "Network Unknown."  This occurs and remains constant despite any combination of settings, including roaming being on or off, and whether or not I am on 3G, 1X, or WiFi.
         Speaking of WiFi...when I am at home I use my home router as opposed to 3G for obvious reasons, which is a NETGEAR N router, yet I still only average about 2.5 mbps when connected to it or any wifi elsewhere.  Yes, I have tried turning my routers security off, and yes, I do have it in b/g mode.  I have also done the 228 number thing, the disabling of voice privacy suggestion, turning roaming on/off, several factory resets, etc. 
        Basically I'm just curious to know if anybody else is having similar issues, and what your average speeds are.  In fact, I'm going to make a little form below this paragraph for anyone who is interested, and rename the title of this post.  Perhaps we can congregate enough responses to come up with an average and then determine whether we are receiving acceptable service or not:
    Phone model:________
    What is your...
         Average 3G download speed? _____Mbps
         Average WiFi download speed? _____Mbps
        Any other comments that are pertinent___________

    first...how do i check my signal and speed? do i NEED an app or is there somewhere on the phone to do this? (i've seen others talk about 85-95 dBm, but never knew how to check). let me know and i'll post the results.
    also, one other thing you can try....and i MUST stress that i know little about this, except that it seemed to have improved my quality (voice/sound....my phone sounded "muffled") on both ends, but i'm not sure if it actually increased my signal strength?
    call ##778
    select "edit mode"
    enter the password: 000000
    go into the "cdma settings"
    change "home page," "home orig," and "roam orig" from evrc to evrc-b
    hit menu and "commit modifications."
    from what i gather, evrc-b is better technology, but is not supported everywhere. so if you try this and your signal is worse (or gone) then you'll need to follow the steps to switch it back. you will also want to save these instructions to your phone in case you find yourself somewhere that appears to be getting poor or no signal, so you can switch it back.

  • Large PDF (40MB) with 600 pages: color quality is poor if I print more than 1 or 2 pages at a time.

    I have a 40MB PDF file with over 600 pages of color images. If I print each page one at a time, there is no issue whatsoever with the quality, but if I print, say, 5 pages, the 4th and 5th page and any subsequent pages will look like rubbish. In general, just not the same color quality as before. The Konica Minolta printer doesn't seem to be at fault here because, like I mentioned, this problem does not occur if I print individual sheets, just if I print more than 5 or so.
    Printing individual sheets in this instance, however, is not a real solution because in all the job will have many thousands of pages. What do I do? Has anyone here experienced a similar problem?

    You were able to catch me whilst my Konica Minolta tech was here.
    This is my take on his diagnosis,
    The C353, while capable of quality output, is not exactly a robust production-level machine.
    Without the aid of a RIP, all the processing needs to happen either in memory or on a hard drive, if it has one. Does the machine have a hard drive?
    In either case, you are running out of memory.
    Changing the printer's spooling properties may aid in completing the job.
    He mentioned sending the job as raster. Print as Image from Acrobat's Advanced tab would do that, but I think you would cripple your computer doing so. Try it; be prepared to go out to lunch or something while it churns away.
    hth

  • A/P Credit memo after GRIN Quality Inspection

    Hi all,
    We have to implement a GRIN Quality Check module for one implementation. What we have done in the add-on is that, for one GRIN, we go to the Quality Check program, where we enter the Accepted Quantity and Rejected Quantity item-wise. Then, for all the Rejected Quantity, we move the items from the Raw Material W/Hs (GRIN W/Hs) to the Rejection W/Hs, and then create the invoice for the complete GRIN Quantity for all the items. We planned to return the rejected items using an A/P Credit Memo, which will nullify both the invoice effect and the stock effect. But there is a problem that we are encountering with the A/P Credit Memo:
    When we perform Copy From from the A/P Invoice to the A/P Credit Memo, all the details from the A/P Invoice are copied down to the A/P Credit Memo. But we want to return the goods not from the RM WHs but from the Rejection WHs, as we had already moved the rejected items' stock there. When we change the Warehouse code from the RM WHs to the Rejection WHs, it removes the tax code from the Tax Code column for that row and afterwards does not even allow editing the Tax Code for that row, without which we are not able to save the A/P Credit Memo.
    Please suggest the best way of implementing the complete scenario so that we are able both to maintain separate stock of rejected items and to return it after creating the complete invoice. We also want to create the complete invoice after GRIN irrespective of the number of items rejected.
    Thanks and Regards,
    Pooja Singh.

    Hi,
    Maybe I elaborated too much; that's why I have not received any solution so far.
    Actually, I just want to confirm one thing first: we are handling GRIN Inspection as follows:
    1. GRIN
    2. Inspection of all the items in a particular GRIN
    3. Stock Transfer of Rejected Items to Rejection Locations
    4. Now we want to create the complete invoice, that is, an invoice for the complete quantity irrespective of the rejected quantity.
    5. Now we return the rejected quantity of items from the Rejection WHs, with a credit note for the vendor to reduce the due balance.
    But when I try to create the A/P Credit Memo from the A/P Invoice and change the WhsCode from the RM WHs to the Rejection WHs, the Tax Code disappears and is disabled.
    What should be done?
    Thanks and Regards,
    Pooja Singh.
