[svn:bz-] 24024: recreating the branch from BlazeDS4.6 branch in place of trunk, as per the discussion from stakeholders and Rohit's recommendation.

Revision: 24024
Author:   [email protected]
Date:     2012-07-31 03:05:09 -0700 (Tue, 31 Jul 2012)
Log Message:
recreating the branch from BlazeDS4.6 branch in place of trunk, as per the discussion from stakeholders and Rohit's recommendation.
Removed Paths:
    blazeds/branches/4.6_Apache/

Similar Messages

  • The best CC's for all the different categories. Giving and looking for recommendations.

    Hopefully this will help others just as much as it'll help me. I'm looking to add one or two more CCs (maybe more, you know how it is =P) to my wallet soon, to end up with a total of 4-5, maybe 6 CCs. I currently have a Sears store card $6k, QS1 $9k, and DCU Platinum $15k. I'm trying to cover all my bases to round out my wallet for best savings, utilization, and earnings...
    1) Low APR: I already have my low-APR CC in case I need to carry a balance - DCU Platinum Visa with an 8.5% APR. It was super easy to qualify for. They instantly approved me when I applied for my auto loan. They offered a $5k limit, I told them I needed a $20k limit, and they countered with a $15k limit.
    2) Everyday Cash Rewards: I already have an everyday cash-rewards CC for more of my spending - QuicksilverOne, which is of course 1.5% cash back on everything (I plan to PC to Quicksilver with no AF, or maybe get a JCB Marukai [3% cash back but really low credit limit] or Citi Double Cash [2% cash back]).
    3) Gas Rewards: I need a gas rewards card since I work in the auto industry and I spend a lot on gas. Considering a Sallie Mae MasterCard for the 5% cash back on the first $250 of gas. Only thing about that is I spend much more than $250 on gas a month. What else would you folks recommend?
    4) Super High Limit: I'm also looking for an easily attainable super-high-limit CC. There's been talk about Lowes offering credit lines in the $30k+ range.
    5) Rotating Rewards: Looking for a rotating rewards card. I'll probably apply for a Discover IT card since I have good history with them - paid on time and in full, and closed the account years ago. Some people advise getting two rotating rewards cards, so maybe Chase Freedom?
    6) "Baller Status": Maybe looking for the "baller status" card eventually, but this is less of a priority for me. Maybe Chase Sapphire Preferred or Amex Gold or Platinum? CSP is made of metal and looks nice. Amex seems to cater to the upper percentile.
    Any suggestions to consider? Other categories I may have missed?

    Monoglot wrote:
    jfriend33 wrote:
    kdm31091 wrote:
    Discover IT is much nicer than Freedom IMO, on a cash back basis (without a CSP). Broader categories. You don't really need both unless you want both, because they are very similar throughout the year, albeit at different times. General spend, well, QS and DC are really not terribly different. If you have the QS, no reason to really rush and upgrade to a DC unless your uncategorized spend is very large, as the difference is not going to be great. As far as gas, yes, the Sallie is an option for 5% up to $250, then you could have a second card like BCP or Cash Rewards, etc. for 3% on the remaining gas spend. BCP does have an AF though. "Amex catering to the upper percentile" is just a mere stereotype. They aren't any more high end than any other issuer. Point is, if you aren't really interested in their products, getting an Amex just to have an Amex is not necessary. As for CSP, run the numbers. Takes a good deal of spend to justify the fee.
    This +1 Couldn't agree more! Unless you are a churner, which if you are, you better get that CSP/Freedom ASAP.... I will say, as much as I hate Chase bank, you could wait for the Freedom to have the $200 sign-up bonus, and double-app for the CSP. This would net you 60k UR points, which are slightly more versatile than Amex MR points. Then you can earn 5% rotating, don't take cash, send points to CSP, then transfer those to your company of choice... and only use CSP for dining/travel expenses. The 95 dollar fee is kinda high. If you only spend 500 a month on dining/travel, that's 6 grand in a year, or 12k UR points. If they were redeemed at their 2-cents-a-point average, that's only $240... less the fee is $145. Not to mention their min spend would be $3500 for CSP/Freedom in 90 days. Bank of America may be your ticket. And I am not just saying that because I have one in my signature. However, they do offer generous limits; I have seen as high as $40k. Their rewards are good but not great though. Your best bet is a trio of cards like you said. Discover for double cash back (great for Black Friday shopping). How much do you spend a month on everything? It's going to boil down to which categories you need besides gas. How much gas a month do you spend? What are your scores? The Marukai is nice but you have to spend a certain amount first to even begin earning the 3%... scroll down to the spending table: http://www.doctorofcredit.com/credit-cards/cash-back/marukai-premium-jcb-card-review/  I recently discovered this and saw you have to spend a min of 20 grand a year just to get the 3%. I am sure I could run everything on a card, but 3% cash back... not great compared to other rewards and signup bonuses.... Would I say NO if Chase offered me the CSP and Freedom? Maybe... probably not. But still, the 2 grand in 90 days you would spend for Amex ED and PRG, that could net you up to 1500 or more in rewards..... You spend 4 grand with the CSP, you will barely get 800.
    Is that table accurate?... I wanted to get that card, but if that table is accurate, I see no point of getting the JCB for a college student like myself. On small spends, the difference between a tiered 3% and a 2% card is going to be very small. I didn't know about this though: For every $100 you spend you'll receive 1-3 JCB cash back points, depending on which tier you're currently in (see above). This is calculated on a per-month basis (e.g. if you make $99 in purchases for a month you'll receive no cash back points).
    That reduces the earnings, as on average you could lose the points on $50 of spend each month, and up to $99.99. But if you can put nearly all except your 5% spend on it, I guess that doesn't matter that much.

  • My Apple ID for the Discussions is an old and discontinued email.  Does that matter? FindLaw deleted old email accounts and discontinued support.

    My profile page shows a current email address.  If Apple needed to send information to me, would they use the email in my profile or the original email?  Is the process for updating the same as Neil pointed out in this discussion?  I do not have a rescue address, nor do I know my security questions.

    You have replied to a four month old question - WITH Personal Identifying Info - a VERY BAD IDEA - I have asked the Hosts to edit these details out.
    If you have a question, you have not stated it. I recommend that you start over by creating a Question of your own by clicking | New | > Discussion
    Briefly state your issue in the Title and tell us the Full Story in the body
    Quoted from  Apple's "How to write a good question"
       To help other members answer your question, give as many details as you can.
    Include your product name and specs such as processor speed, memory, and storage capacity. Please do not include your Serial Number, IMEI, MEID, or other personal information.
    Provide the version numbers of your operating system and relevant applications, for example "iOS 6.0.3" or "iPhoto 9.1.2".
    Describe the problem, and include any details about what seems to cause it.
    List any troubleshooting steps you've already tried, or temporary fixes you've discovered.

  • E72: Removing Email account from a restore and lar...

    I removed all my push mail accounts and went manual because my battery life is longer that way. I also purged my account from email.nokia.com. Recently, my IMAP mail from Google stopped working. I thought it was because of some corrupted data in the phone, so I tried removing and recreating the mailbox numerous times, rebooted and all, but nothing works. It simply doesn't get mails past a certain date/time.
    I did a hard reset, shift+space+backspace+poweron and re-setup everything. I still encountered the same problem on my E72. So I did a restore using the Ovi application. Nightmare begins.
    I restored the 'push mail' accounts which no longer exist because I have previously removed them from email.nokia.com. Now when I try to remove the mailbox, it says 'you need to connect to xx to remove it.' Obviously I cannot connect because that account no longer exists. And the 2 mailboxes spam me with prompts for passwords and neither can log on. I had to hard-reset again and setup everything from scratch thanks to this stupid problem.
    I eventually traced the problem to 2 emails which are exceptionally long. People just reply and reply and reply until the mail became about 20 pages long. Let's say I have 30 mails in my gmail 1-30. Mail 10 and 15 are the long ones. If I sync, it will only retrieve 1-9, then get stuck there. If I go to gmail to delete 10, it will sync until 14 then get stuck there. When it is 'stuck', it just connects and disconnects right away, without attempting to retrieve my mail 16-30.
    I tried again using push mail. Push mail works with that large mail 15, probably because it is retrieved server-side.
    Please advise. How to solve these issues? It is quite frustrating... I don't want to use google's gmail application because you can't save attachments with that. And PDFs all become text files in gmail app's viewer.

    Open Mail on the old laptop and in the Accounts section of its preferences click on your email account and then on the "-" button at the bottom. Then go into the User/Library folder and delete the Mail folder to remove all the old emails.
    Happy Holidays
    TIP: For insurance against the iPhoto database corruption that many users have experienced I recommend making a backup copy of the Library6.iPhoto database file and keep it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backup after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That insures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
    I've created an Automator workflow application (requires Tiger), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 08 libraries and Leopard. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.

  • Is it possible to save/export Messages created in the Discussions Panel?

    Is there a way to save/export messages created in the discussions panel?  For example, when saving a report to PDF or Excel, is it possible to include the discussion messages?  Is there a way to query the Discussion Messages so that they could appear on a report?

    You can't query for discussions directly in a report.  In the CMS database this information is stored in a binary format.
    You would have to either use the Query Builder tool or write a program using the SDK to get to it.  The query you would run looks something like this:
    Select *
    from CI_INFOOBJECTS
    where SI_PARENTID = (SI_ID of the report the discussion is attached to)
      and SI_HIDDEN_OBJECT = 1
      and SI_KIND = 'DISCUSSIONS'
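    If you go the SDK route, a rough Java sketch would look something like the following (class names are from the BusinessObjects Enterprise Java SDK; the CMS name, credentials and parent ID are placeholders, so adjust for your environment and SDK version):
    import com.crystaldecisions.sdk.framework.CrystalEnterprise;
    import com.crystaldecisions.sdk.framework.IEnterpriseSession;
    import com.crystaldecisions.sdk.occa.infostore.IInfoObject;
    import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
    import com.crystaldecisions.sdk.occa.infostore.IInfoStore;
    public class ListDiscussions {
        public static void main(String[] args) throws Exception {
            // Log on to the CMS (server, user, password and auth type below are placeholders).
            IEnterpriseSession session = CrystalEnterprise.getSessionMgr()
                    .logon("Administrator", "password", "mycms:6400", "secEnterprise");
            try {
                IInfoStore infoStore = (IInfoStore) session.getService("InfoStore");
                // Same query as above; 1234 stands in for the SI_ID of the report.
                IInfoObjects discussions = infoStore.query(
                        "SELECT * FROM CI_INFOOBJECTS"
                        + " WHERE SI_PARENTID = 1234 AND SI_HIDDEN_OBJECT = 1"
                        + " AND SI_KIND = 'DISCUSSIONS'");
                for (int i = 0; i < discussions.size(); i++) {
                    IInfoObject obj = (IInfoObject) discussions.get(i);
                    System.out.println(obj.getID() + " - " + obj.getTitle());
                }
            } finally {
                session.logoff();
            }
        }
    }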
    -Dell

  • Can I transfer my entire playlist from itunes to my ipad without having to recreate the playlist again?

    Can I transfer my entire playlist from itunes to my ipad without having to recreate the playlist again?

    Tammiev36 wrote:
    It is not setting up my playlist it is just auto filling from the playlist
    I do not understand.  Is it filling a playlist from the songs in a different playlist?

  • [svn:fx-4.0.0] 13647: this should actually fix the build - call the modified main target to call the bundle task for osmf and actually make the call from frameworks/build.xml

    Revision: 13647
    Author:   [email protected]
    Date:     2010-01-19 17:04:22 -0800 (Tue, 19 Jan 2010)
    Log Message:
    this should actually fix the build - call the modified main target to call the bundle task for osmf and actually make the call from frameworks/build.xml
    QE notes:
    Doc notes:
    Bugs:
    Reviewer:
    Tests run:
    Is noteworthy for integration:
    Modified Paths:
        flex/sdk/branches/4.0.0/frameworks/build.xml

    Hi Renuka,
    The model classes get generated under gen_cmi folder at the time of model import.
    As far as I know, the Web Dynpro build operation also re-generates these classes if required.
    That is why I asked you to delete the gen_* folder content & do a Project Rebuild operation.
    Or you can delete the model & try importing it again.
    If the problem persists with the generated classes, then there is an issue/bug with the generation & I would recommend raising an OSS message (WD Java component)
    Kind Regards,
    Nitin

  • [svn] 4842: Backport of DCRAD changes from gumbo to Flex3.x branch.

    Revision: 4842
    Author: [email protected]
    Date: 2009-02-04 14:48:13 -0800 (Wed, 04 Feb 2009)
    Log Message:
    Backport of DCRAD changes from gumbo to Flex3.x branch.
    This includes the CallResponder, HTTPMultiService, and
    changes to RPC components to support FB tooling.
    It also includes two new properties on AbstractProducer + Consumer:
    priority and maxFrequency. Adds MXML components for:
    AsyncToken, CallResponder, MultiTopicConsumer and Producer
    and HTTPMultiservice.
    For the most part, these files are now identical to those in the rpc.swc files on main.
    QE: Yes, eventually needs a run of the rpc.swc tests
    checkintests: pass. Also ran blazeDS and LCDS checkintests against this rpc.swc
    Doc: Same components we are doc'ing for gumbo
    Reviewers: reviewed by various folks when they were put in on main
    Modified Paths:
    flex/sdk/branches/3.x/frameworks/projects/rpc/manifest.xml
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/messaging/AbstractConsumer.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/messaging/AbstractProducer.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/messaging/MessageAgent.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/AbstractInvoker.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/AbstractOperation.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/AbstractService.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/AsyncToken.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/events/AbstractEvent.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/events/ResultEvent.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/http/HTTPService.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/http/mxml/HTTPService.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/mxml/Concurrency.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/remoting/Operation.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/remoting/RemoteObject.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/remoting/mxml/Operation.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/remoting/mxml/RemoteObject.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/soap/Operation.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/soap/mxml/WebService.as
    Added Paths:
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/CallResponder.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/http/AbstractOperation.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/http/HTTPMultiService.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/http/Operation.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/http/SerializationFilter.as
    flex/sdk/branches/3.x/frameworks/projects/rpc/src/mx/rpc/http/mxml/HTTPMultiService.as

    This link should resolve the authentication issue with Samba and Windows 7 clients. http://www.builderau.com.au/blogs/codemonkeybusiness/viewblogpost.htm?p=339270746
    I was having problems getting a Win2k8 Server to authenticate to my Mac OS X 10.6.2 Server acting as a primary domain controller. But when I followed the instructions on this link to change the client network security option to allow authentication using LM or NTLM and not NTLMv2 exclusively, my Mac Server was able to authenticate to the Win2k8 Server. It solved my problem and hopefully solves yours too.
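    For reference, the registry equivalent of that policy change is below (a sketch only - the key and value name are the standard ones, but confirm the level you want; 1 means "Send LM & NTLM - use NTLMv2 session security if negotiated" and does weaken security). Run it in an elevated prompt on the Windows client and reboot:
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f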
    Cheers

  • Recreating the light sensor from the complete sensor template

    Hi, new to this board and a total newbie to LabView. Tried recreating the light sensor from the complete sensor template for use with a similar functioning homebrew sensor. Turned out to be quite difficult. Anybody around here that's already done that, or some similar sensor block, that you could share as a template to examine and modify? If the original light sensor block were not locked, I would have started from that.
    I also wonder if there is any way to get the digital line on pin 6 to behave as the digital line on pin 5 does with the "generate light" function (would be great for making a color sensor with two different colored LEDs as illumination). Message Edited by PTP on 05-10-2007 03:11 PM

    More or less trying to do a copy of the original light sensor block included with the Mindstorms software, by starting out with the complete sensor template in the LabView Toolkit (I have the student version 7.1). Planning on adapting it for my own analog sensor application later on (using the light out on pin 5 for another on/off application). I can't re-import the version in the Mindstorms software, so I have to start out from the template. My application of course works with the Mindstorms light sensor block too, but it would be cool to make it from the complete sensor template with the toolkit, using my own scaling algorithms etc.
    Michel Gaspari has a version made from the simple sensor template here:
    http://www.extremenxt.com/cdssensor.html
    But I would like to get the terminals for scaling, and a logic terminal with a cut-off value, into the block too

  • [svn:osmf:] 10248: Fix a few bugs related to the interaction between IPlayable, IPausable, and ITemporal within a SerialElement, specifically around ensuring that the transition from child to child happens in the various permutations of these traits.

    Revision: 10248
    Author:   [email protected]
    Date:     2009-09-14 16:45:00 -0700 (Mon, 14 Sep 2009)
    Log Message:
    Fix a few bugs related to the interaction between IPlayable, IPausable, and ITemporal within a SerialElement, specifically around ensuring that the transition from child to child happens in the various permutations of these traits.  Introduce a helper class for managing this logic, as it can happen in both CompositePlayableTrait and CompositeTemporalTrait.  This lays the groundwork for a MediaElement which only implements IPlayable (e.g. to ping a tracking server) working within a serial composition.  Beef up unit tests so that these cases don't get broken in the future.
    Modified Paths:
        osmf/trunk/framework/MediaFramework/.flexLibProperties
        osmf/trunk/framework/MediaFramework/org/openvideoplayer/composition/CompositePlayableTrait.as
        osmf/trunk/framework/MediaFramework/org/openvideoplayer/composition/CompositeTemporalTrait.as
        osmf/trunk/framework/MediaFrameworkFlexTest/org/openvideoplayer/composition/TestSerialElement.as
    Added Paths:
        osmf/trunk/framework/MediaFramework/org/openvideoplayer/composition/SerialElementTransitionManager.as

    Hi,
    Found a note explaining the significance of these errors.
    It says:
    "NZE-28862: SSL connection failed
    Cause: This error occurred because the peer closed the connection.
    Action: Enable Oracle Net tracing on both sides and examine the trace output. Contact Oracle Customer support with the trace output."
    For further details you may refer to the Note: 244527.1 - Explanation of "SSL call to NZ function nzos_Handshake failed" error codes
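    If it helps, enabling Oracle Net tracing is just a couple of sqlnet.ora entries on each side (a sketch; the directories are placeholders and level 16 is the SUPPORT level):
    TRACE_LEVEL_CLIENT = 16
    TRACE_DIRECTORY_CLIENT = /tmp/net_trace
    TRACE_LEVEL_SERVER = 16
    TRACE_DIRECTORY_SERVER = /tmp/net_trace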
    Thanks & Regards,
    Sindhiya V.

  • [svn] 3687: Add some messaging tests for scenarios that involve subscribing to a destination, disconnecting from the channel currently being used and resubscribing to the destination.

    Revision: 3687
    Author: [email protected]
    Date: 2008-10-16 11:23:39 -0700 (Thu, 16 Oct 2008)
    Log Message:
    Add some messaging tests for scenarios that involve subscribing to a destination, disconnecting from the channel currently being used and resubscribing to the destination. There is an issue that is causing the streaming/multipleDisconnectsResubscribesTest.mxml test to fail that I will log a bug for.
    Added Paths:
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/longpolling/
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/longpolling/disconnectResubscribeJMSTest.mxml
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/longpolling/disconnectResubscribeTest.mxml
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/longpolling/multipleDisconnectsResubscribesTest.mxml
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/polling/
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/polling/disconnectResubscribeJMSTest.mxml
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/polling/disconnectResubscribeTest.mxml
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/polling/multipleDisconnectsResubscribesTest.mxml
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/streaming/
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/streaming/disconnectResubscribeJMSTest.mxml
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/streaming/disconnectResubscribeTest.mxml
    blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/messagingService/unsubscribeScenarios/streaming/multipleDisconnectsResubscribesTest.mxml

  • OC4J: marshalling does not recreate the same data structure on the client

    Hi guys,
    I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
    I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
    Suppose we have 2 value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via its private field and the ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList structure. Here is how it looks (the number after @ represents an address in memory):
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@2
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
    We would expect to see the same (except exact addresses) after marshalling. Here is what we get instead:
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@3
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
    It can be seen that objects get unnecessarily duplicated – the instance of the ValueObject1 referenced by the ValueObject2 is not the same now as the instance that is referenced by the ArrayList instance.
    This does not only break referential integrity, structure and consistency of the data but dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that gets serialized into a 142kb file requires about 20Mb of network communication. All this extra info is duplicated object instances.
    I have created a small test case to demonstrate the problem and let you reproduce it.
    Here is RMITestBean.java:
    package com.cramer.test;
    import javax.ejb.EJBObject;
    import java.util.*;
    public interface RMITestBean extends EJBObject {
        public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
        public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
    }
    Here is RMITestBeanBean.java:
    package com.cramer.test;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;
    import java.util.*;
    public class RMITestBeanBean implements SessionBean
    private SessionContext context;
    SomeVO someVO;
    public void ejbCreate()
    someVO = new SomeVO(0);
    public void ejbActivate()
    public void ejbPassivate()
    public void ejbRemove()
    public void setSessionContext(SessionContext ctx)
    this.context = ctx;
    public byte[] getSomeDataInBytes(int testSize)
    ArrayList someData = getSomeData(testSize);
    try {
    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
    objectOutputStream.writeObject(someData);
    objectOutputStream.flush();
    System.out.println(" serialised output size: "+byteOutputStream.size());
    byte[] bytes = byteOutputStream.toByteArray();
    objectOutputStream.close();
    byteOutputStream.close();
    return bytes;
    } catch (Exception e) {
    System.out.println("Serialisation failed: "+e.getMessage());
    return null;
    public ArrayList getSomeData(int testSize)
    // Create array of objects
    ArrayList someData = new ArrayList();
    for (int i=0; i<testSize; i++)
    someData.add(new SomeVO(i));
    // Interlink all the objects
    for (int i=0; i<someData.size()-1; i++)
    for (int j=i+1; j<someData.size(); j++)
    ((SomeVO)someData.get(i)).addValueObject((SomeVO)someData.get(j));
    ((SomeVO)someData.get(j)).addValueObject((SomeVO)someData.get(i));
    // print out the data structure
    System.out.println("Data:");
    for (int i = 0; i<someData.size(); i++)
    SomeVO tmp = (SomeVO)someData.get(i);
    System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
    System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
    for (int j = 0; j<tmp.getValueObjectCount(); j++)
    SomeVO tmp2 = tmp.getValueObject(j);
    System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
    System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
    // Check the serialised size of the structure
    try {
    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
    objectOutputStream.writeObject(someData);
    objectOutputStream.flush();
    System.out.println("Serialised output size: "+byteOutputStream.size());
    objectOutputStream.close();
    byteOutputStream.close();
    } catch (Exception e) {
    System.out.println("Serialisation failed: "+e.getMessage());
    return someData;
    Here is RMITestBeanHome:
    package com.cramer.test;
    import javax.ejb.EJBHome;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    public interface RMITestBeanHome extends EJBHome {
        RMITestBean create() throws RemoteException, CreateException;
    }
    Here is ejb-jar.xml:
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
    <ejb-jar>
    <enterprise-beans>
    <session>
    <description>Session Bean ( Stateful )</description>
    <display-name>RMITestBean</display-name>
    <ejb-name>RMITestBean</ejb-name>
    <home>com.cramer.test.RMITestBeanHome</home>
    <remote>com.cramer.test.RMITestBean</remote>
    <ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
    <session-type>Stateful</session-type>
    <transaction-type>Container</transaction-type>
    </session>
    </enterprise-beans>
    </ejb-jar>
    And finally the application that tests the bean:
    package com.cramer.test;
    import java.util.*;
    import javax.rmi.*;
    import javax.naming.*;
    public class RMITestApplication
    final static boolean HARDCODE_SERIALISATION = false;
    final static int TEST_SIZE = 2;
    public static void main(String[] args)
    Hashtable props = new Hashtable();
    props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
    props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
    props.put(Context.SECURITY_PRINCIPAL, "admin");
    props.put(Context.SECURITY_CREDENTIALS, "admin");
    try {
    // Get the JNDI initial context
    InitialContext ctx = new InitialContext(props);
    NamingEnumeration list = ctx.list("comp/env/ejb");
    // Get a reference to the Home Object which we use to create the EJB Object
    Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
    // Now cast it to an InventoryHome object
    RMITestBeanHome testBeanHome = (RMITestBeanHome)PortableRemoteObject.narrow(objJNDI,RMITestBeanHome.class);
    // Create the Inventory remote interface
    RMITestBean testBean = testBeanHome.create();
    ArrayList someData = null;
    if (!HARDCODE_SERIALISATION)
    // ############################### Alternative 1 ##############################
    // ## This relies on marshalling serialisation ##
    someData = testBean.getSomeData(TEST_SIZE);
    // ############################ End of Alternative 1 ##########################
    } else
    // ############################### Alternative 2 ##############################
    // ## This gets a serialised byte stream and de-serialises it ##
    byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
    try {
    java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
    java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
    someData = (ArrayList)objectInputStream.readObject();
    objectInputStream.close();
    byteInputStream.close();
    } catch (Exception e) {
    System.out.println("Serialisation failed: "+e.getMessage());
    // ############################ End of Alternative 2 ##########################
    // Print out the data structure
    System.out.println("Data:");
    for (int i = 0; i<someData.size(); i++)
    SomeVO tmp = (SomeVO)someData.get(i);
    System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
    System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
    for (int j = 0; j<tmp.getValueObjectCount(); j++)
    SomeVO tmp2 = tmp.getValueObject(j);
    System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
    System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
    // Print out the size of the serialised structure
    try {
    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
    objectOutputStream.writeObject(someData);
    objectOutputStream.flush();
    System.out.println("Serialised output size: "+byteOutputStream.size());
    objectOutputStream.close();
    byteOutputStream.close();
    } catch (Exception e) {
    System.out.println("Serialisation failed: "+e.getMessage());
    catch(Exception ex){
    ex.printStackTrace(System.out);
    The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE defined at the beginning of RMITestApplication.java. The HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across or we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure. The bigger this size is the more dramatic effect you get from data duplication.
    The test case outputs the structure both on the server and on the client and prints out the size of the serialised structure. That gives us sufficient comparison, as both structure and its size should be the same on the client and on the server.
    The test case also demonstrates that the problem is specific to OC4J. The standard Java serialisation does not suffer the same flaw. However using the standard serialisation the way I did in the test case code is generally unacceptable as it breaks the transparency benefit and complicates interfaces.
    To run the test case:
    1) Modify provider URL parameter value on line 15 of the RMITestApplication.java for your environment.
    2) Deploy the bean to the server.
    4) Run RMITestApplication on a client PC.
    5) Compare the outputs on the server and on the client.
    I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
    Cheers,
    Alexei

    Hi,
    Eugene, wrong end user recovery.  Alexey is referring to client desktop end user recovery which is entirely different.
    Alexy - As noted in the previous post:
    http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
    Each recovery point has user permissions tied to it, so it's not possible to retroactively give the users permissions.  Implement the below and going forward all users can restore their own files.
    This is a hands off solution to allow all users that use a machine to be able to restore their own files.
     1) Make these two cmd files and save them in c:\temp
     2) Using windows scheduler – schedule addperms.cmd to run daily – any new users that log onto the machine will automatically be able to restore their own files.
    <addperms.cmd>
     Cmd.exe /v /c c:\temp\addreg.cmd
    <addreg.cmd>
     set users=
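     REM The lines below build a perms.reg file listing every profile folder under
     REM C:\Users as a DPM ClientOwner (the !users! variable needs delayed expansion,
     REM which is why addperms.cmd launches this script with "cmd /v"), import it into
     REM the registry, and then delete the temporary file.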
     echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
     echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
     FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
     echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
     REG IMPORT c:\temp\perms.reg
     Del c:\temp\perms.reg
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • What is the best way to drop and recreate a Primary Key in the Replication Table?

    I have a requirement to drop and recreate a primary key in a table which is part of transactional replication. What is the best way to do it other than removing it from replication and adding it again?
    Thanks
    Swapna

    Hi Swapna,
    Unfortunately you cannot drop columns used in a primary key from articles in transactional replication.  This is covered in
    Make Schema Changes on Publication Databases:
    You cannot drop columns used in a primary key from articles in transactional publications, because they are used by replication.
    You will need to drop the article from the publication, drop and recreate the primary key, and add the article back into the publication.
    To avoid having to send a snapshot down to the subscriber(s), you could specify the option 'replication support only' for the subscription.  This would require the primary key be modified at the subscriber as well prior to adding the article back in
    and should be done during a maintenance window when no activity is occurring on the published tables.
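    As a rough outline, the sequence looks something like this (an untested sketch - the publication, article, and key names are placeholders, and the exact sp_addarticle/sp_addsubscription options depend on your topology):
    -- 1) Drop the subscription(s) on the article, then drop the article itself.
    EXEC sp_dropsubscription @publication = N'MyPublication', @article = N'MyTable', @subscriber = N'all';
    EXEC sp_droparticle @publication = N'MyPublication', @article = N'MyTable', @force_invalidate_snapshot = 1;
    -- 2) Drop and recreate the primary key on the publisher (and on the subscriber,
    --    since 'replication support only' assumes the schemas already match).
    ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
    ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (KeyColumn);
    -- 3) Add the article back and re-create the subscription without sending a new snapshot.
    EXEC sp_addarticle @publication = N'MyPublication', @article = N'MyTable', @source_object = N'MyTable';
    EXEC sp_addsubscription @publication = N'MyPublication', @subscriber = N'SubscriberServer',
         @destination_db = N'SubscriberDB', @article = N'MyTable',
         @sync_type = N'replication support only';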
    I suggest testing this out in your test environment first, prior to deploying to production.
    Brandon Williams (blog |
    linkedin)

  • How can I recreate the Clarity effect in Photoshop CS6?

    Hi,
    I want to recreate the Clarity effect but in Photoshop CS6. I know I have it in Lightroom and Camera Raw, but for academic purposes I need to create the same effect using ONLY Photoshop CS6...
    Thanks,
    Juan Dent
    Message title was edited by: Brett N

    The original "clarity" technique is just a midtone contrast adjustment; the original idea, as far as I know, is from Mac Holbert, formerly of Nash Editions. The recipe is below, but Mac used to have an action on his web site (I have it too and could send). Or build your own. Here are the steps:
    Mac Holbert's Midtone Contrast
    1. Highlight your top layer in your Layers Palette, then:
    2a. In CS I: Select Layer->New->Layer to create a new, blank layer at the top of your layer stack. Then, holding down your Opt key (Mac) / Alt key (PC), select Merge Visible from the fly-down menu on the right side of your Layers Palette. Be sure to keep Opt/Alt depressed until you see the blank layer update. You should now have an additional layer at the top of your layer stack. It represents how the image would appear if you had flattened your layers. Rename this layer "Midtone Contrast".
          -OR-
    2b. In CS II: Holding down your Opt key (Mac) / Alt key (PC), select Merge Visible from the fly-down menu on the right side of your Layers Palette. Be sure to keep Opt/Alt depressed until you see the blank layer update. You should now have an additional layer at the top of your layer stack. It represents how the image would appear if you had flattened your layers. Rename this layer "Midtone Contrast".
    3. Next, double-click on the Midtone Contrast layer icon to bring up the Layer Style Palette. Change the Blend Mode to Overlay and lower the Blend Mode Opacity to 20%. Now move the left "This Layer" slider to 70. Then split away the left side of that slider by holding down the Opt/Alt key and move it to 50. Repeat the same process on the right "This Layer" slider, moving the sliders to 185 and 205 respectively. Then select "OK".
    4. Now select Filter->Other->High Pass. In the High Pass Palette set the radius to 50 and select "OK". Now select Image->Adjustments->Desaturate. The Midtone Contrast layer is now complete. At 20% opacity it should be very subtle, but noticeable. The effect can be decreased or increased by raising or lowering the Midtone Contrast layer opacity. I've found that the proper setting can usually be found between 20% and 40% opacity. Above 40% one risks creating "halo" artifacts that are visually distracting.

  • Export/Import full database dump fails to recreate the workspaces

    Hi,
    We are studying the possibility of using Oracle Workspace Manager to maintain multiple versions of our business data. I recently did some tests with the import/export of a dump that contains multiple workspaces. I'm not able to import the dump successfully because it doesn't recreate the workspaces on the target database instance, which is a freshly created, empty Oracle database. Before launching the import with the Oracle Data Pump utility, the "LIVE" workspace exists in the WMSYS.WM$WORKSPACES_TABLE table. After the import is done, there is nothing in that table...
    The versions of the Oracle database and Oracle Workspace Manager are the same on source and target database:
    Database version (from the V$VERSION view):
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Workspace Manager version (from the WM_INSTALLATION view):
    OWM_VERSION: 10.2.0.4.3
    In order to recreate the tablespaces successfully during the full import of the dump, I made the directory structure for the tablespaces the same on the target database's computer. I used the instructions given in this document, section "1.6 Import and Export Considerations" at page 1-19:
    http://www.wyswert.com/documentation/oracle/database/11.2/appdev.112/e11826.pdf
    Considering that the release of Oracle database used is version 10.2.0.4, I use the following command to import the dump since it doesn't contain the WMSYS schema:
    impdp system/<password>@<database> DIRECTORY=data_pump_dir DUMPFILE=expfull.dmp FULL=y TABLE_EXISTS_ACTION=append LOGFILE=impfull.log
    The only hints I have in the import log file are the following block of lines:
    1st one:
    ==============================
    Traitement du type d'objet DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
    ORA-39083: Echec de la création du type d'objet PROCACT_SYSTEM avec erreur :
    ORA-01843: ce n'est pas un mois valide
    SQL en échec :
    BEGIN
    if (system.wm$_check_install) then
    return ;
    end if ;
    begin execute immediate 'insert into wmsys.wm$workspaces_table
    values(''LIVE'',
    ''194'',
    ''-1'',
    2nd one at the end of the import process:
    ==============================
    Traitement du type d'objet DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    ORA-39083: Echec de la création du type d'objet PROCACT_SCHEMA avec erreur :
    ORA-20000: Workspace Manager Not Properly Installed
    SQL en échec :
    BEGIN
    declare
    compile_exception EXCEPTION;
    PRAGMA EXCEPTION_INIT(compile_exception, -24344);
    begin
    if (system.wm$_check_install) then
    return ;
    end if ;
    execute immediate 'alter package wmsys.ltadm compile';
    execute immediate 'alter packag
    The target operating system is in french... here is a raw translation:
    1st one:
    ==============================
    Treatment of object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
    ORA-39083: Object type creation failed PROCACT_SYSTEM with error :
    ORA-01843: invalid month
    failed SQL :
    BEGIN
    if (system.wm$_check_install) then
    return ;
    end if ;
    begin execute immediate 'insert into wmsys.wm$workspaces_table
    values(''LIVE'',
    ''194'',
    ''-1'',
    2nd one at the end of the import process:
    ==============================
    Treatment of object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    ORA-39083: Object type creation failed PROCACT_SCHEMA with error :
    ORA-20000: Workspace Manager Not Properly Installed
    failed SQL :
    BEGIN
    declare
    compile_exception EXCEPTION;
    PRAGMA EXCEPTION_INIT(compile_exception, -24344);
    begin
    if (system.wm$_check_install) then
    return ;
    end if ;
    execute immediate 'alter package wmsys.ltadm compile';
    execute immediate 'alter packag
    By the way, the computer of the source database is Vista 64 in english and the destination computer is Vista 64 in french, do you think this has anything to do with these error messages? The parameters of the NLS_SESSION_PARAMETERS view are the same in the source and target database...
    Thanks in advance for your help!
    Joel Autotte
    Lead Developer
    Edited by: 871007 on Jul 13, 2011 7:31 AM

    I tried to import the full database dump with the "imp" command and I had the same error message with more detail:
    . Import d'objets SYSTEM dans SYSTEM
    IMP-00017: Echec de l'instruction suivante avec erreur ORACLE 1843 :
    "BEGIN "
    "if (system.wm$_check_install) then"
    " return ;"
    " end if ;"
    " begin execute immediate 'insert into wmsys.wm$workspaces_ta"
    "ble"
    " values(''LIVE'',"
    " ''194'',"
    " ''-1'',"
    " ''SYS'',"
    " *to_date(''09-DEC-2009 00:00:00'', ''DD-MON-YYYY HH"*
    *"24:MI:SS''),"*
    " ''0'',"
    " ''UNLOCKED'',"
    " ''0'',"
    " ''0'',"
    " ''37'',"
    " ''CRS_ALLNONCR'',"
    " to_date(''06-APR-2011 14:24:57'', ''DD-MON-YYYY HH"
    "24:MI:SS''),"
    " ''0'',"
    " '''')'; end;"
    "COMMIT; END;"
    IMP-00003: Erreur ORACLE 1843 rencontrée
    ORA-01843: ce n'est pas un mois valide
    ORA-06512: à ligne 5
    The call to the TO_DATE function with that date format string is incompatible with the date language set in the NLS parameters of my session. Since I know that the "imp" and "exp" commands work client side (unlike "impdp" and "expdp"), I did a full export of the source database from the computer on which I'm trying to import the dump, so that the same language was used in the dump, and I was able to import that dump successfully.
    But I would like to use the Data Pump tools instead of the old import and export tools. How would it be possible to set the NLS parameters that "impdp" must use before launching the import of the dump?
    The NLS parameters are set to the "AMERICAN" settings in both source and destination database when I validate that with the "NLS_DATABASE_PARAMETERS" view. The dump I did with "expdp" on the source database exported the data with the "AMERICAN" settings as well...
    On the destination database, the computer language is in french and I guess it is considered by the Data Pump import tool since it outputs the log in the language of the computer and it tries to do a TO_DATE with the wrong date format because it works with the "CANADIAN FRENCH" settings...
    Any suggestions? I'll continue to read on my side and re-post if I find the solution.
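    One workaround I have seen suggested but have not tested yet: since the Data Pump workers run as server sessions (so the client NLS_LANG may never reach them), a temporary logon trigger could force the date settings for the importing user, along these lines:
    -- Sketch only; adjust the user check and drop the trigger once the import is done.
    CREATE OR REPLACE TRIGGER impdp_force_nls
    AFTER LOGON ON DATABASE
    BEGIN
      IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'SYSTEM' THEN
        EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_DATE_LANGUAGE = ''AMERICAN''';
        EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_DATE_FORMAT = ''DD-MON-YYYY HH24:MI:SS''';
      END IF;
    END;
    /
    -- Afterwards: DROP TRIGGER impdp_force_nls;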
    Thanks!
    Edited by: 871007 on Jul 19, 2011 8:42 AM
    Edited by: 871007 on Jul 19, 2011 8:43 AM
    Edited by: 871007 on Jul 19, 2011 8:43 AM

Maybe you are looking for

  • Firefox has been terminated because it is deemed harmful by active virus control

    My Firefox worked perfectly well until today, when I launched Firefox and encountered this problem. When I start Firefox, I get an error report that Firefox has been terminated because it is deemed harmful by active virus control. Please I need a way

  • Cant install itunes 10.5

    Why can't I install iTunes 10.5 on my Alienware Windows 7 64-bit? I've been trying for 3 days now, still it says error on Windows Installer package. Can anyone help me??

  • Addition of config keys for transaction CT04

    For transaction code CT04, for a material for which I have output types ZCRE and ZCHG, under variant config we have configuration value and configuration description. I would like to add additional config keys like configuration key part3 and configura

  • Pen Drive (HP V-220 W 8 GB}

     {1}    sir i am a problem my hp pen drive v220w is a mini 15 sec is a attech to pc  to hhiting  full    {2}        first time in a attech to pc  sms your fasterd pen drive first time and never not sms

  • Website bug, 404 errors with content

    Seems parts of the Apple website have broken content (some are images, others are videos). On AirPort Base Station Firmware Update 7.7.3, the main image is missing, and on http://store.apple.com/us/product/HFQQ2ZM/A/adidas-micoach-smart-ball the video i