Communicators will probably be extinct

Nokia N900 specifications
Maemo 5 OS
3.5-inch (800 x 480) resistive touchscreen display
QWERTY keyboard
Mozilla-based browser, full Adobe Flash support
ARM Cortex-A8 processor
32 GB internal memory
5.0 MP Carl Zeiss camera with dual-LED flash, auto-focus and sliding cover
MicroSDHC support up to 16GB
FM transmitter
Quadband GSM/GPRS/EDGE, WCDMA 900/1700/2100
Wi-Fi
Bluetooth with A2DP
GPS
1320 mAh battery
110.9 × 59.8 × 18 mm
181 g
I sincerely don't think future communicators can compete with the features on the N900, so it's not surprising the E90 is the last of its kind.

You might be right. However, the keyboard on the N900 is nowhere near as complete as the keyboard on the E90. Thus, the E90 is still way better at texting and messaging than the N900 and N97.
2110i, 6150, 6210, 6310i, 6670, 9300, 9300i, E90, E72, HTC Touch Pro2, Samsung Galaxy S, Samsung Galaxy S II

Similar Messages

  • Why do my songs and audio books keep getting erased when I add a new book?  Now some books will not play and the iPod does not keep track of where I was when I pause a book, leave for music and then go back!

    About a month ago I added some new audio books to my iPod Classic and then after disconnecting, I discovered EVERYTHING had been erased. 
    I had to restore through my iTunes but then I could not load my music back on.
    I uninstalled / reinstalled iTunes and completely reformatted the iPod drive.
    I successfully got my music and books back on but then added a new book two days ago only to discover everything got erased AGAIN!!!
    Of course, my 1 Year Warranty was up about a week before the first incident.....
    I formatted the iPod again, uninstalled / reinstalled iTunes (it was freezing during sync) and downloaded all my music and books again.
    Now I have several books that will not play (they play on iTunes and they played on the iPod before all these events) and the iPod does not keep track of where I was in a book when I have to stop and go to something else.
    Does anyone know what is going on with this thing???  I have a 5th Gen Video and in all the years I've had it, I've NEVER had problems like this! 
    Does anyone have any ideas on what I can do to get my iPod Classic up and running properly again?
    Thanks.

    When the iTunes/iPod sync process fails due to a timeout, the iPod is left with only the initialised filesystem structure from the start of the sync.
    The timeout failure could be due to:
    Bad hard disk - run the iPod disk diagnostic; refer to this excellent post by tt2
    Slow USB port or resources - don't use any USB hub, and disconnect all other USB devices while syncing
    Timeout due to antivirus or other plugins - disconnect from the Internet and stop the antivirus or monitoring software while you are syncing
    Preferably stop doing other things while syncing this ancient device, which the latest iTunes designers think will soon be extinct.
    Have a nice day!

  • Bridge CS4 will not update to Camera Raw 5.7 on one of my computers.

    Bridge (CS4) will properly display my Nikon NEFs (from my Nikon D3 & D3S) on my old Mac Pro (Lion 10.7.5), but only the NEFs from the D3 are displayed by the exact same program on my MacBook Pro Retina (Mavericks).
    I have followed Adobe's instructions perfectly, over and over again, but no matter what I try (and I did try some variations) I cannot update from Camera Raw Preferences Version 5.0.0.178 on the MacBook Pro Retina.
    The Camera Raw Preferences version on the older Mac Pro computer's identical copy of Bridge CS4 is 5.7.0.213.
    I tried opening and installing a download of Camera_Raw_5_7.dmg from Adobe, and I also tried dropping in the Camera Raw plugin file from the older computer, but nothing works. I have emptied the cache several times.
    I don't know if this is significant, but when I first started trying to do this, I could not find the Plug-Ins folder in the Adobe folder on the MacBook Pro Retina, so I copied the one from the Mac Pro's Adobe folder.
    Please help. I can only use Bridge on my older desktop computer because of this problem.
    Thank you.
    Jan

    Well, Jan, over the years I've helped many Mac users with this precise problem, and I even posted screen shots showing the path to the correct plug-in folder for ACR.  However, in your case all those images won't help and I won't be able to generate new ones for you for one simple reason: I do not use Mavericks and never will because my Mac Pro desktops cannot go above Lion, FORTUNATELY. Mavericks is still cr@p at this point.
    All I can do is try to clarify a few misconceptions for you:
    AutoMatters wrote:
    …I read that about the root level library when I first tried to follow the instructions from Adobe and I thought that I was doing exactly that. Instead of the library associated with my name, I found the library associated with my hard drive. I suspect that I have not yet even seen the root level library…
    Jan, that is the ROOT LEVEL Library.
    Getting back to your advice, I do not understand what you mean by mounting the .dmg file as if it were a DVD or external hard drive.
    I mean just that.  A Disk Image file (.dmg) is a virtual disk, like a virtual DVD.  You double click on that file in the Finder and it will then mount just as if you had inserted a DVD into your computer.  It will then show you the contents it carries.  I will not be at my computers for a few weeks, so I can't be sure about all that the Disk Image for that update shows; but somewhere in there will be the ACR Plug-in.  That's what you need to copy from the Disk Image to your hard drive and install it in its correct location, NOT THE BLOODY .dmg file, for goodness sakes!
    Do you understand that once you are at the ROOT LEVEL Library you DO NOT go to the Plug-ins folder therein?  Read the instructions again.  What you want is to continue the path through the Application Support folder /Adobe / Plug-ins / etc.
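    For what it's worth, those mount-and-copy steps can also be scripted. The sketch below is only an illustration, not Adobe's procedure: it assumes the update file is named Camera_Raw_5_7.dmg and that the destination is the usual CS4 plug-in location on the root-level Library; verify both against Adobe's instructions before running anything.

    # Rough sketch only: mount the Camera Raw update image and copy the plug-in
    # into the root-level Library. The DMG name and destination path are
    # assumptions -- check them against Adobe's instructions first.
    # (Copying into /Library normally needs an administrator account.)
    import glob
    import shutil
    import subprocess

    DMG = "Camera_Raw_5_7.dmg"
    DEST = "/Library/Application Support/Adobe/Plug-Ins/CS4/File Formats/Camera Raw.plugin"

    # Mounting the disk image is the scripted equivalent of double-clicking it in the Finder.
    subprocess.run(["hdiutil", "attach", DMG], check=True)

    # Find the Camera Raw plug-in bundle on whichever volume the image mounted as.
    matches = glob.glob("/Volumes/*/**/Camera Raw.plugin", recursive=True)
    if matches:
        shutil.rmtree(DEST, ignore_errors=True)   # clear out the old plug-in first
        shutil.copytree(matches[0], DEST)         # the .plugin is a folder bundle
        print("Copied", matches[0], "to", DEST)
    else:
        print("Camera Raw.plugin not found on any mounted volume")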
    This old screen shot may help. Command or Control-Click on the thumbnail to see the full image in a new window, then scroll horizontally; it is a VERY wide image:
    Also, if I delete the Plug-ins folder that I copied from my other computer's installation of Bridge CS4, should I reinstall the entire CS4 suite from the original disks and start all over again? If so, I read somewhere that I will need to somehow deactivate my serial number from the products as installed on my MacBook Pro so that I may reinstall it.
    Delete anything you may have copied yourself only if you are 100% sure what that is. But DO NOT REINSTALL ANYTHING without first uninstalling using the Adobe UNINSTALLER that was installed in your Utilities folder inside your Applications folder.
    Whatever possessed you to go around copying folders from a different Mac OS version???  That is insane! Geez…
    Why the heck do you keep talking about Bridge???!!!  Bridge has absolutely NOTHING to do with the installation of ACR.  NOTHING!
    ACR is installed at the OS level and can be hosted interchangeably by Photoshop or by Bridge from the same location.
    Bridge is totally out of the loop in this discussion, just as it is when you are using ACR hosted by Bridge.  All Bridge does is call upon the appropriate application, whether this be Photoshop, Adobe Camera Raw (ACR), Illustrator or even Microsoft Word and hand it the file, period.  You could use ACR for years without ever launching Bridge, even without having Bridge installed. Ever.
    Somehow I suspect that German is your native language. Or am I mistaken?

  • When will a Lightning update that works with Thunderbird v31.0 beta be released?

    I was using Thunderbird v 30 beta with Lightning 3.2b1 and it worked great. Thunderbird v31 was issued, and automatically updated. Add-ons are set for "automatic update". Everything in calendar is now grayed-out, and nothing works. I searched the Lightning downloads and found that Lightning 3.2b1 is the latest version offered, and that Lightning 3.2b1 states it is for "Thunderbird 30.0 and later". Obviously, Lightning 3.2b1 doesn't work with Thunderbird v31.
    Two things:
    1. I recommend that Lightning updates be released prior to the release of Thunderbird beta updates so that your testers who utilize the Lightning add-on have the full capability they had prior to the Thunderbird update(s), and
    2. There should (a) be a notice sent with updated beta versions of Thunderbird that states whether or not they will properly interface with the currently-installed version of Lightning, and/or (b) the update should not be applied automatically if it detects an installed Lightning add-on with which it is incompatible; instead, the user should be asked to decide whether to proceed with the update and lose functionality.

    Steven is right: while it is appreciated that there is a workaround, the current procedure - upgrade TBird, restart TBird, see that Calendar is broken, think of the right search keywords, find a solution online, download, install, restart TBird - leaves a lot to be desired, to say the least. And this recurs every few months.
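    In the meantime, a quick way to see which Thunderbird versions an already-downloaded Lightning .xpi claims to support is to read its install.rdf. The sketch below is only an illustration and makes two assumptions: the add-on still ships an install.rdf (true for Lightning of this era) and the file is saved locally as lightning.xpi.

    # Rough sketch: print the application version range declared by a Lightning .xpi.
    # Assumes the add-on contains an install.rdf and is saved as lightning.xpi.
    import zipfile
    import xml.etree.ElementTree as ET

    with zipfile.ZipFile("lightning.xpi") as xpi:
        root = ET.fromstring(xpi.read("install.rdf"))

    # install.rdf is RDF/XML; the version range can appear as elements or attributes.
    for elem in root.iter():
        tag = elem.tag.rsplit("}", 1)[-1]
        if tag in ("minVersion", "maxVersion") and elem.text:
            print(tag, "=", elem.text.strip())
        for attr, value in elem.attrib.items():
            if attr.rsplit("}", 1)[-1] in ("minVersion", "maxVersion"):
                print(attr.rsplit("}", 1)[-1], "=", value)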

  • If I install Leopard 10.5 will I lose my Os 9.2.2?

    Hi,
    I have a 466 MHz PowerPC G4 with 1.5 GB of SDRAM, running 10.4.11. I love it!
    1) What I am faced with is losing my OS 9.2.2 if I install Leopard 10.5. I have Dreamweaver UltraDev for OS 9 and I use it every day for my websites; will I lose my OS 9?
    2) I am on a fixed income (on purpose), so I know I will never buy an Intel Mac.. sad but true, I guess. It seems my PPC just won't go the distance with the internet, with no Flash support or any new apps or software. I am still seeing Flash videos in my Firefox 3.6.28 since I upgraded my Adobe Flash to the highest version they would offer, Flash Player 10.1.102.64, but I'm sure someday my version of Firefox will just go extinct. I'm using Adblock on Firefox to save memory and I downloaded TenFourFox. My Safari is really doing well also.
    How many tweaks can I keep doing to keep up? The alternatives are quickly vanishing. Shame on Apple for doing this to us loyalists.
    Should I migrate to a Linux system and just learn a new website editor program, like Kompozer?
    Thanks-LionInSunHeart-Arizona

    Mac OS 9.02 emulation in Snow Leopard and Lion: SheepShaver
    SheepShaver requires a Classic ROM and quite a bit of setup, but there is now a fully contained version, Chubby Bunny, available: Google the term "Classic-On-Intel v 4.0.1 chubby bunny"
    Here is some information on SheepShaver:
    http://www.emaculation.com/doku.php/...mac_os_x_setup
    and
    http://www.everymac.com/mac-answers/...ntel-macs.html

  • Time Machine does not restore Aperture library properly

    After considerable testing, I have found 2 issues. (1) When the TM restore takes place, it restores the contents of the Aperture package, not the package itself. This can be remedied by creating a new library, opening its package, replacing its contents with the restored items, closing the window and opening that package. It works, but not the way it should. (2) Although TM appears to save the full library size (5 GB in my test), and I can see it in backup.backup, the restore only brings back the bare database (500 MB for me), and thus it has to regenerate all of the thumbnails, which can be very, very time-consuming for a large database.
    Has anyone had a different experience and, if so, what did they do?

    Update upon further testing: If you select an Aperture library icon in the finder, enter Time Machine, go to a previous version and do a Restore, you get a dialog box asking if you want to keep the original or replace it. In my testing I have found that Replace works fine, but if you select Both (keep the original), Time Machine returns a folder of the Aperture library contents, not the package as described above. The folder resides next to the original library but cannot be used until it is put into a package. I think this is a bug.
    However, if you have the library in a folder and you restore that folder using the Both option, Time Machine will properly provide a new folder and a restored library in it in a package form.

  • WSUS - Parent server will not purge computers from the unassigned group after being moved on the downstream servers.

    I need some help, please. I have read every forum I can think of and Googled for months, and I cannot find a solution.
    I have 1 primary WSUS server and 3 replica servers. Often, systems are placed into the unassigned group on the replica servers and then, after a synchronization, they show up on the primary server in the unassigned group.
    The problem is that when I log into a replica server and move these computers to another group, they never get moved on the primary server. They constantly remain on the primary server in the unassigned group. I know this is probably something simple that I have overlooked in the documentation, but believe me, I have tried everything that I can come up with. Please help! :)

    > So when you move them into the appropriate group... this is not remembered on the server.
    It is... but only until the *client* (which believes that it is authoritative for its group memberships -- as a result of the policy configuration) tells the server otherwise. Then the server has the group membership as reported by the client machine.
    > I thought that when the client went to contact the server again and said I'm in this group (which happens to not exist on the server)
    If the *group* does not exist on the server, the client will be seen only in "All Computers". The group name reported by the client is still stored in the database as an attribute of the client, but because the console is not configured to display that group, the client computer does not appear in any group except "All Computers".
    > the server would then say, yes you are; however the admin has told me that you really belong to this other group.
    No, the server never assumes it is authoritative for anything coming from the client. The WUAgent is the authoritative source for everything in the WSUS environment (except Update Approvals <g>). If the WUAgent is not configured to use "policy-based groups", then the WUAgent will query the WSUS Server to determine which group(s) the client belongs to, and if the server is configured in Options | Computers to "Use the console... to assign groups", then it will answer with the currently stored group name(s). However, if the WUAgent is given that information via policy, then it doesn't ask for anything, it *tells* the server what the facts are, and those facts are used to update the WSUS database.
    > On the downstream server they remain in the appropriate groups after I move them.
    > Its only on the upstream server that I have this problem.
    There are several configuration parameters that could be contributing to this. We should start by verifying the correct settings for all of them.
    1. Options | Computers on the replica server is set to "Use group policy".
    2. Options | Computers on the upstream server is set to "Use group policy".
    3. The policy configured group exists on the upstream server.
    4. The policy configured group has replicated to the replica server.
    5. The policy has been properly applied to the client system.
    6. The client appears in the correct group(s) on the replica server, according to the group(s) configured in the policy.
    7. The replica server has synchronized successfully with the upstream server since the client system last *reported* to the replica.
    8. The upstream server has "Reporting Rollup" enabled.
    9. Sufficient time has passed to allow the upstream server to complete the asynchronous updating of the reporting data from the replica server.
    If all of the above is correct, and the client system still appears in "Unassigned Computers" on the upstream server, then we'll have something that needs further investigation. Most likely, though, one of those items enumerated is not copacetic and is interfering with the correct recording of the client system's group membership. Once corrected, the upstream server will properly reflect the actual group memberships as configured in policy.
    Lawrence Garvin, M.S., MCITP:EA, MCDBA
    Principal/CTO, Onsite Technology Solutions, Houston, Texas
    Microsoft MVP - Software Distribution (2005-2009)
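    For reference, item 5 in the checklist above can be spot-checked on a client by reading the Windows Update policy keys directly. This is only a rough sketch using the standard client-side targeting value names; verify the names against your own GPO before relying on it.

    # Rough sketch: print the WSUS / client-side targeting policy values on a client.
    # Value names are the standard Windows Update policy settings; adjust as needed.
    import winreg

    WU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WU_KEY) as key:
        for name in ("WUServer", "TargetGroupEnabled", "TargetGroup"):
            try:
                value, _ = winreg.QueryValueEx(key, name)
                print(name, "=", value)
            except FileNotFoundError:
                print(name, "is not set")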

  • Will touch screen be the way of future communicato...

    Hi all, I recently read reviews of Nokia's newest & first touch screen handset, the 5800 XpressMusic with S60 5th edition;
    it appears its functions again outperform the E90 communicator. Will there ever be a business device that could catch up?
    Solved!
    Go to Solution.

    With the recent launch of the Nokia XpressMusic 5800, Nokia has not only introduced the Series 60 5th edition system but has also "arrived" as far as touch-screens are concerned. Yes, there was the now-old 7710, which was the only device to run on the Series 90 platform -- but how many of those are still around? Nokia seems to have silently "retired" its other platforms, intent only on S60 for now.
    It is inevitable that the Nokia communicators will soon get the touchscreen treatment as well. Indeed, at a recent webcast, Nokia dropped a teaser in the form of a touchscreen version of a future communicator bearing a strong family resemblance to the E90 and its predecessors. The device seems to be an all-touchscreen phone that converts into a Communicator when the 'pencil box' form factor is opened.
    Sadly, there is no official word from the company regarding any such device.
    Articles posted courtesy engadget
    Keep us updated about the progress.... if you like what I have to offer then click on kudos.

  • Why does the option to watch a movie even appear if it then will not play because it has not downloaded? Worse, there are then only 23hrs left to watch!! Paying for what is not delivered!!

    ITUNES:  Why does the option to "watch movie now" appear when the chosen movie will not play because it is still downloading? WHY SHOULD THE OPTION APPEAR IF THE MOVIE CAN'T BE PLAYED. Then, when the movie is viewable, there are only 23hrs to watch! PAYING FOR MOVIES WITHOUT THE OPPORTUNITY TO VIEW does not make a happy consumer. I have emailed iTunes over and over about the problem and am still paying for services unrendered.

    When the iTunes/iPod sync process fails due to a timeout, the iPod is left with only the initialised filesystem structure from the start of the sync.
    The timeout failure could be due to:
    Bad hard disk - run the iPod disk diagnostic; refer to this excellent post by tt2
    Slow USB port or resources - don't use any USB hub, and disconnect all other USB devices while syncing
    Timeout due to antivirus or other plugins - disconnect from the Internet and stop the antivirus or monitoring software while you are syncing
    Preferably stop doing other things while syncing this ancient device, which the latest iTunes designers think will soon be extinct.
    Have a nice day!

  • Will my Adobe programs migrate to a new computer

    My 6-year-old iMac has finally bitten the dust. It won't start under any circumstances.  Fortunately, I use Time Machine, so everything is backed up.
    But I am wondering whether my Adobe programs, such as PS, Elements & Lightroom, will properly migrate to my new computer using Time Machine?
    Or will I need to find the original installation disks (recently moved & can't find the PS one, and LR & Elements I loaded online)?

    coachgns, I've given this some serious thought.
    You would first have to boot off the recovery partition to restore from Time Machine, and it would wipe your new OS and everything that came with it: iLife, iWork and so on.
    You would be replacing it with older OS software and apps that would probably not play well with the new hardware. I don't want you to have more problems than you need.
    For OS 10.8.5, legacy PPC apps are no longer supported.
    I've only done a full restore to the same hardware. You are doing this to a nice new machine, and if there is an Apple Store nearby, get an appointment with them on how best to migrate everything off Time Machine to your new Mac.
    As for Adobe products, only the Intel-based ones can be reinstalled: Photoshop CS3 and later, for example.
    Gene

  • Memory speed on T400

    I have 2 ea. 2 GB DDR3 PC-8500 1066 MHz memory modules installed in my T400. If I check the modules under Everest Ultimate 460, they read DDR2 running at 667 MHz. If I use CPU-Z they also read DDR2 but don't show a speed. Has anyone else been able to check the speed of their modules, and what program did you use? I don't see anything in the BIOS to cause the memory to run slower than rated. Do you think they are really running at 1066 MHz, or do I have some type of compatibility problem? I get the same reading if I pull one of the modules out and check them one at a time.

    Short answer: yes, the DDR3 memory bus in your T400/T500 should show 533 MHz (+/- 1 MHz), since DDR3-1066 transfers data twice per clock cycle, so the reported I/O clock is 1066 / 2 = 533 MHz.
    CPU-Z 1.51 will properly show it as DDR3; previous versions may have incorrectly shown it as DDR2.
    Here are your answers explained in detail:
    http://en.wikipedia.org/wiki/DDR3_SDRAM
    T500 - P8600 2.4Ghz Core 2 Duo, Modded - 4GB Patriot DDR3 and 320GB Caviar Black 7200rpm drive with Ati HD3650 and Catalyst 9.6 modded drivers - Vista Business 32bit stripped down to bare bones with VM's from Ubuntu to Win7RC1 64-bit.

  • Is there a way of stopping the kernel from trying to load a device???

    Hi,
    Basically, I have an Nvidia 7950GX2 that I have installed in a Mac Pro to use in Windows, and it works perfectly, but when I try to boot into OS X I get kernel panics.... I still have my 7300 in the machine for OS X. When I try this method with a 7800 GTX it works just fine.....
    Is there a way I can stop OS X trying to load the device on boot? Disable it?
    Thanks in Advance...................

    Video cards that do not have Apple ROMs cannot boot an EFI-based machine like the Mac Pro.  Currently the only video cards that will properly work in the Mac Pro are the three available from Apple.  You can read more about this in the Mac Pro forums by doing some searching and browsing, because the topic has been widely discussed here.  You will also find more at www.xlr8yourmac.com, www.barefeats.com, and the review of the Mac Pro at www.anandtech.com.

  • ORACLE SERVER AND UNIX TP MONITOR-2

    Product: ORACLE SERVER
    Date written: 1995-01-24
    Subject: Oracle Server and UNIX Transaction Processing Monitors-2
    Page(3/4)
    This file contains commonly asked questions about Oracle7 Server and UNIX
    Transaction Processing Monitors (TPMs). The topics covered in this article are
         o Oracle Parallel Server and TP Monitors
         o Oracle and DCE-based TP Monitors
         o Other commonly asked questions
    The questions answered in part 3 provide additional detail to the information
    provided in part 1.
    Oracle Parallel Server and TP Monitors
    ======================================
    How does Oracle Parallel Server (OPS) work with TP Monitors?
    If you are using Oracle-managed transactions, there are no special
    considerations. But if you are using TPM-managed transactions, and
    thus need to use the XA interface, then Oracle requires release 7.1.3
    or later and a special version of the Distributed Lock Manager, called
    the session-based lock manager. This version of the DLM is not yet
    available for all platforms. To understand this restriction, let's take
    a look at one of the technical details of XA.
    The XA specification requires that the Resource Manager be able to
    move a transaction from one process to another, and even to be
    able to commit in a separate process. In Oracle, transactions are
    attached to sessions, so that means that we also have to be able to
    move sessions. Therefore, the session/transaction can't have any state
    which is tied to a particular process. The first generation distributed
    lock managers were all built to use the process id as the lock owner,
    which doesn't work for locks which need to move with the transaction.
    Oracle and DCE-based TP Monitors
    ================================
    How does Oracle interface to the Encina TP monitor? To CICS/6000? I've
    heard that they require OSF DCE facilities in order to run?
    Oracle interfaces to Encina and CICS/6000 just as it does to any other
    TP Monitor. The TP Monitor issues XA commands to control transactions, and
    Oracle executes the commands. Encina and CICS/6000 do use DCE features for
    their own operation. However, this use is transparent to the Oracle Server.
    What DCE facilities can Oracle products take advantage of when working with
    a DCE-based TP Monitor?
    The two most commonly mentioned DCE features which might be useful
    to Oracle users are multi-threading and security. We look at these in
    the subsequent questions in this section.
    Encina documentation suggests that a Resource Manager such as Oracle can
    be either single-threaded or multi-threaded? Which way is Oracle XA
    implemented?
    The Oracle XA implementation is single-threaded, as is any Oracle client.
    Within a single process, at most one thread can access Oracle at a time.
    Does that mean that only a single Encina application can access an instance
    of Oracle transactionally at any given moment?
    No. Oracle XA is only single-threaded within a single application server
    process. Multiple applications can access Oracle simultaneously using XA
    by using different application processes. Encina allows
    (1) serial reuse of a single server by different clients. There are
    two options for this. The server can use long term reservation
    but be defined to be in shared or concurrent access mode, which
    allows the server to be used by another client as soon as an RPC
    completes. Alternatively, the server can use default reservation
    and exclusive mode, which allows the server to be used by another
    client as soon as the current transaction ends.
    (2) concurrent execution by multiple servers, even if they are accessing
    the same Oracle database. These may be executing the same or different
    procedures.
    These two features should let you get as much concurrency as you need.
    Why isn't the Oracle XA library multi-threaded?
    The XA specification specifically states that its use of the phrase
    "thread of control" means a process. If an RM were to multi-thread its
    XA, it would be in violation of the specification. This restriction
    was put in place because, at the time the specification was written,
    there were numerous thread packages: if the TM used one, the application
    another, and perhaps the RM yet a third, there was no way it could work.
    As threads standards settle down, the later versions of XA will probably
    relax this restriction.
    Will Oracle change if the XA specification changes?
    Very likely. The exact time frame will of course depend on the priority of
    all work items at that time.
    Does Oracle use DCE security via the TP Monitors?
    The integrity of the connection between a DCE TP Monitor client and DCE
    TP Monitor server is protected by the DCE security functionality.
    Theoretically, the TP Monitor could make the DCE-protected client security
    information available to Oracle. Unfortunately, there's no standard way
    for a TP Monitor to pass security information to a Resource
    Manager such as Oracle. Oracle is leading an effort to extend the X/Open
    model to allow use of the security information provided by the Monitor.
    In the meantime, the basic DCE security features such as encryption are
    useful within TP Monitors.
    Effective use of DCE security would normally also mean that the security of
    the TP Monitor client be passed through the TP Monitor, through the Oracle
    client (application server), to the Oracle Server, and possibly on
    to other Oracle Servers through database links. The ability to transfer
    security information to other processes, called delegation, is missing
    in DCE version 1.0. DCE version 1.1, expected to emerge in late 1994,
    has some delegation features. Oracle is examining these features to see
    how they might be used.
    Are there any special considerations for CICS/6000?
    There are two:
    (1) It is inefficient to run without XA. CICS/6000 is designed to
    use XA. It uses XA so that the CICS server can log on to Oracle
    when it starts, after which it makes that Oracle connection available
    to any transaction it executes. If you don't use XA, the CICS server
    does not itself log on to Oracle so each transaction has to log on
    and log off - a very expensive mode of operation. Also, it is very
    un-CICS-like in that the application does the logon/logoff and also
    commits - in a mainframe CICS database program CICS would implicitly
    do these operations. Oracle does not recommend this mode because of the
    performance penalty.
    (2) CICS servers are generic and dynamically load application modules.
    In order for these modules to access the Oracle connection made by
    CICS, the applications must be built with a shared object version of
    the Oracle libraries. This is an installation option on platforms which
    support CICS/6000 and other products using its architecture such as
    CICS 9000.
    Other commonly asked questions
    ==============================
    What other Resource Managers can be included in an Oracle XA transaction?
    Several other relational database vendors have an XA implementation
    available or in progress. There is an XA C-ISAM product from
    Gresham Telecomputing. There are also Resource Managers contained
    within some of the TP Monitors which can be coordinated in the same
    transaction. For example, CICS/6000 has VSAM files and other data
    stores, Encina has its RQS queuing system, and Tuxedo has its /Q queuing
    system.
    What is Recoverable Queuing Service (RQS) and how does it interoperate with
    Oracle7 and Encina? What about /Q?
    Recoverable Queuing Service is a feature provided by Encina which allows
    transactional, distributed queuing (enqueue/dequeue). Tuxedo has a similar
    product called /Q. Because these products are themselves coordinated by the
    TM component of the TP Monitor, their queue operations are atomically
    coordinated with operations on XA Resource Managers such as Oracle7
    Server. That is, they can atomically put something on one of their queues
    and commit an Oracle transaction, then at some later time dequeue an
    entry atomically with doing some other Oracle transaction. The queue
    system guarantees that the message will not be lost or transmitted twice.
    Can I mix TP Monitor applications with standard Oracle7 Server applications?
    Yes, you can have existing Oracle applications connected to the database
    alongside TPM applications against the same database. The TPM does
    not manage the whole database, just those transactions which are started
    by the TPM. The Oracle Server will properly handle concurrency control
    between the transactions managed by itself and those managed by the TPM.
    Is Oracle planning to change its tools to be more suitable for TP Monitors?
    With Oracle Procedure Builder 1.5, to be available with CDE2,
    Oracle will provide a foreign function interface that allows you to
    dynamically set up PL/SQL calls that access C functions. In other
    words, you can access C routines in Windows DLLs from within your
    PL/SQL procedures. This will allow PL/SQL under Windows easy access to
    TP Monitor APIs.
    Does Oracle7 Server itself use XA-compliant TPMs as the interface to
    foreign RMs?
    No, for this purpose Oracle Server uses the SQL*Connect products or the new
    Transparent and Procedural Gateway products.
    Does Oracle7 Server use XA to coordinate Oracle7-only distributed
    transactions?
    No, it uses an internal mechanism.
    Can database links be used with XA?
    If an Oracle7 database is running under XA, it can access other Oracle7
    databases through database links, with some restrictions. First, the
    access to the other database must use SQL*Net V2 and be running MTS.
    Second, it must currently be to another Oracle7 database. Assuming those
    restrictions, the Oracle 7 database can do distributed update to another
    Oracle 7 database by using a database link, whether it is started by an
    Oracle application or a TP Monitor application. The TPM will see Oracle
    as only a single RM, but Oracle7 will propagate all the transaction
    commands to the other database, including the two-phase commit. If
    the transaction is started by a TP Monitor application and is using XA,
    it can also update non-Oracle resources managed by the TPM. If it
    is started from an Oracle application, it can only include resources
    managed by Oracle.
    Here's a sample configuration:
    [Diagram: two TPM clients connect to the TPM, which drives a non-XA TPM server (note 1) and an XA TPM server. Two Oracle clients (Forms, Plus, Pro*, Financials, etc.) connect directly. Each of the four issues commits (SQL commits from the Oracle clients and the non-XA server, an XA commit from the XA server) through its own Oracle server process against Database 1 or Database 2, and the two databases reach each other through database links.]
    Note 1: Oracle will work with both XA and non-XA servers, but some TPMs
    may have restrictions on this.
    Are multiple direct connections possible from a Pro* program?
    Using XA, you can not only specify multiple direct connections to Oracle7
    databases, you can also update them both in the SAME transaction. The
    way to do this is to use a precompiler feature called a named database.
    When you use a named database, you qualify the SQL statement with the
    database name. For example, you write EXEC SQL AT dbname UPDATE emp ....
    We have a complementary feature in the xa open string to let the user
    associate the name with a particular RM instance, called the DB clause.
    You will also want to use the SqlNet clause in the open string so you
    can give the two different SIDs. This clause does not require the use of
    the SQL*Net product, it is just a naming convention. For more information,
    see Oracle7 Server for UNIX Administrator's Reference Guide.
    Some TP Monitors may not support having multiple Resource Managers in the
    same server; check with the TPM vendor.
    Is there any collateral available for XA or TP Monitors?
    Oracle At Work                                             52684.0692
    Oracle7 Server for UNIX Administrator's Reference Guide    #A10324-1
    Guide to Oracle's Products and Services                    #A10560
    Oracle7 Server and CICS/6000                               #A14200
    Where can I get more information on the DTP model?
    X/Open's address is
    X/Open company Ltd (Publications)
    P O Box 109
    Penn
    High Wycombe
    Bucks HP10 8NP
    Tel: +44 (0)494 813844
    Fax: +44 (0)494 814989
    Request
    G307 Distributed Transaction Processing: Reference Model Version 2
    X/Open Guide G307 ISBN 1-859120-19-9 28cm.44p.pbk.220g.11/93
    Page(4/4)
    This file contains commonly asked questions about Oracle Server and UNIX
    Transaction Processing Monitors (TPMs). The topics covered in this article are
         o Performance with Oracle Server and TP monitors
         o Performance using Oracle's XA Library
    The questions answered in part 4 provide additional detail to the information
    provided in part 1.
    Performance with Oracle Server and TP Monitors
    ==============================================
    I have heard that Transaction Processing Monitors (TPMs) will increase
    Oracle Server performance. Is this true?
    Several hardware and TPM vendors have made the claim that TPMs
    will increase RDBMS performance. This claim is based on TPC-A
    benchmarks. The key point to understand about TPC-A is that it
    requires, for every transaction-per-second, ten times that many
    users to be connected. For example, to get 600 TPS, you need 6000
    users. The next question will answer in more detail how the
    three-tier architecture addresses this requirement, but first let's
    look more generally at what TP Monitors can and can't do to improve
    performance.
    TP Monitors can provide better performance:
    (1) When there are more than several hundred users connected.
         This is because of the TP Monitor's role in the three-tier
         architecture, described in the next question. In this
         architecture, terminal handling is offloaded to one or more
         separate machines, freeing up those cycles to do database work.
         Note that this does NOT mean that Oracle itself runs faster,
         just that we've given it more CPU cycles to use.
    (2) When, because of the high potential concurrency of requests,
         significant resource contention exists. Use of a TP Monitor can
         limit the degree of concurrency and thus reduce contention.
    TP Monitors can not provide better performance:
    (1) For existing applications. The applications must be designed
         to fit the TP Monitor architecture.
    (2) For applications which are highly interactive in their use of
         the database. These applications put many messages
         through the transport system, and the TP Monitor is not as
         efficient as SQL*Net for point-to-point communication.
    (3) For CPU intensive single-query decision support. When executing
         a single large command, Oracle query facilities work efficiently,
         especially with the use of Oracle Parallel Query, available in 7.1.
    How does the three-tier solution help TPC-A, or other situations with
    thousands of on-line users?
    The TPC-A test calls for a large number of users to produce a given
    result. In the high-end results we produced in June, 1992, for example,
    6150 terminals were simulated to produce 618 TPC-A transactions.
    Thus, terminal concentration accounts for a large portion of the total
    processing time used.
    First, let's look at how the Multi Threaded Server would work for
    this benchmark. In this case, there are many client processes,
    but only a few server processes, which handle client requests on a
    first-come, first-served basis. When they are done with a request,
    they take another client's request.
    ORACLE7 CLIENT/SERVER ARCHITECTURE WITH MULTI THREADED SERVER
    [Diagram: N client processes connect over SQL*Net to one or more dispatcher processes on the server; the dispatchers hand requests to one or more shared server processes, which do the work inside the Oracle7 Server. Client processes = N; dispatcher processes >= 1; shared server processes >= 1.]
    If there are 500 clients in this environment, there will be one or more
    dispatcher processes, dynamically tunable, and one or more shared
    server processes, dynamically tunable, on the server. The reduction
    in the total number of processes handled by the server system
    results in more processing time available for RDBMS activity. Thus
    higher RDBMS transaction throughput can be obtained on the
    server system.
    But the problem for the TPC-A, and for certain large customer
    configurations, is not only the ability of the Oracle Server to
    process transactions, but also the ability of the operating
    system to handle huge numbers of incoming connections.
    There is one incoming connection for each client. Most UNIX
    operating systems have a limit on how many such connections they can
    handle. Even if a particular operating system allows a large number of
    connections, each takes some amount of overhead to manage.
    In order to service all 6150 terminals, we selected a 3-tier hardware
    environment where the middle tier, using a TPM, acted as a terminal
    concentrator. The high-end TPC-A architecture looked like the following.
    The Application Servers, which contain the Pro*C statements used to
    perform the transaction, also run on the terminal concentrator machine
    in order to offload as much work from the database server as possible.
    They send the compiled SQL over SQL*Net to the Oracle7 Server processes.
    ORACLE7 TPC-A CLIENT/SERVER ARCHITECTURE
    [Diagram: client processes connect through TPM communications modules to terminal concentrator machines running the TP Monitor and the application server processes (which contain the Pro*C code); the application servers send the SQL over SQL*Net to Oracle Server processes on the back-end machine running the Oracle7 Server. Clients = 6150; terminal concentrators = 17; TP Monitor instances = 17; application server processes = 17*8; Oracle Server processes = 17*8.]
    The TPM is the software component of the terminal concentrator. In this role
    it offloads terminal handling from the machine running Oracle Server.
    Since more than one terminal concentrator can be configured, whereas the
    database in this case had to run on a single machine, concentrator machines
    can be added until the performance of the back-end machine is optimized.
    This three-tier solution resulted in the outstanding transaction throughput
    announced with Oracle7 Server. Even with Oracle Parallel Server, it may pay
    to offload the terminal handling so that the cluster can be exclusively used
    for database operations.
    Can you summarize the performance discussion for me?
    Depending on the number of users required, different architectures may be
    used in a client/server environment to maximize performance:
    1) For a small number of users, the traditional Oracle two-task
    architecture can be used. In this case, there is a one-to-one
    correspondence between client processes and server processes. It's
    simple, straightforward, and efficient.
    2) For a large number of users, Multi Threaded Server might be a better
    approach. Although some tuning may be required, Multi Threaded Server
    can handle a relatively large number of users for each machine size
    compared to the traditional Oracle approach. Using this approach,
    customers will be able to handle many hundreds of users on many
    platforms. Furthermore, current Oracle applications can move to this
    environment without change.
    3) For a very large number of users, where transactions are simple and
    terminal input concentration is the overriding performance issue, a
    3-tier architecture incorporating a TPM may be useful. In this case,
    terminal concentration is handled by the TPM in the middle tier. As
    you might expect, it is a more complex environment requiring more
    system management. For existing Oracle customers, significant Oracle
    application modifications will be required.
    Oracle provides all of these choices.
    Performance using Oracle's XA Library
    =====================================
    Are there any performance implications to using the XA library (in other
    words, to using TPM-managed transactions)?
    (1) The XA library imposes some performance penalty. You should use
    TPM-managed transactions only if you actually need them. Even if you
    are getting the one-phase commit optimization, the code path is
    longer because we need to map back and forth between external
    formats and internal ones. Also, prior to 7.1, XA requires you
    to release all cursors at the end of a transaction, which results
    in extra parsing. Even with shared cursors, there is time spent
    looking up the one you need and re-validating it. This has been
    improved for 7.1.
    (2) If you need to use two-phase commit, this will incur additional cost
    since extra I/Os are required. If you do need 2PC, you need to account
    for that when sizing the application.
    (3) Although some TPMs allow parallel execution of services (such as Tuxedo's
    "tpacall"), this will not normally enhance performance unless different
    resource managers are being used. In fact, Oracle Server must serialize
    accesses to the same transaction by the same Oracle instance, and the
    block/resume code will in fact degrade performance in that case compared
    to running the services sequentially.

    Hello,
    The role is the same on all platforms. The Reports Server takes requests for running reports and spawns an engine that executes the request. In addition, the server also provides scheduling services and security features for the reports environment.
    Regards,
    the Oracle Reports team

  • Need help with Airport Express and so on.

    Ok, so my main problem, before getting into what I need help with here, is that our MacBooks and now my iPhone 6 Plus aren't staying online. They keep getting booted off, and then I either have to select the network again or it will eventually go back on. If anyone has a solution to that, please feel free to answer it as well. I'm running on Road Runner with a Netgear 600 wireless router and a Motorola modem, both of which I'll leave links to below for a better look.
    My main question: I'm looking at a new wireless router mainly, and possibly a new modem. I know Apple products are trustworthy, but how good are the AirPort Express and the other AirPort products? Also, what are the maximum speed and maximum data speed for the cheapest Express station? If anyone knows the speeds of the other devices, it would be greatly appreciated.

    DSL router to Netgear 5-port switch, and I used the switch to connect to the AirPort Extreme, TV, Blu-ray DVD player and DirecTV receiver.
    The AirPort Extreme base station (AEBS) is a router, so it will do what you need.
    You need to reconfigure your setup. Connect the WAN port of the AEBS to the DSL router. Then connect the Netgear switch to one of the LAN ports on the AEBS. The AEBS will properly share the connection.

  • How can I use "date modified" from metadata to date iphoto events?

    I am very frustrated trying to move photos from Windows to my iMac.  I have a number of photos, and all of them have some metadata with date created and date modified.  This worked well in Picasa in Windows and kept everything in order.  Now it seems iPhoto will not read date modified, or assigns the date everything was switched over as the date created (if there is no date in the metadata).  I looked over some of the wrongly organized photos (there are thousands) and realized that under the old system, Picasa in Windows must have used either date created or date modified, whichever came earlier, or perhaps whichever was available.  In iPhoto, it doesn't read "date modified", or does this crazy thing where, even if the modification date is before the creation date, it puts all photos in the order of when they were brought over from Windows.  I have a lot of photos dated September 18 -- thousands.
    Is there any way to get either iPhoto or some other program (I'm not terribly impressed by iPhoto) that will properly organize the photos in an iMac environment? In other words, I want a program to look at (1) the creation date or (2) the modification date, whichever is earlier.
    Even Picasa is now confused (in the Mac environment). 
    Thanks,
    Mark

    No, as this is file metadata not photo metadata.
    You can adjust the date and time using the Photos -> Adjust Date and Time or even the Photos -> Batch Change command.
    Regards
    TD
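    For what it's worth, the "whichever date is earlier" rule Mark describes can be approximated with a small script run on the files before importing them. This is only a rough sketch, not an iPhoto feature: it assumes the exiftool command-line tool is installed and that writing the earlier filesystem date into DateTimeOriginal is acceptable for your photos.

    # Rough sketch: stamp each JPEG's EXIF DateTimeOriginal with the earlier of the
    # file's creation and modification dates, so iPhoto sorts by that date on import.
    # Assumes the exiftool command-line tool is installed.
    import os
    import subprocess
    import sys
    from datetime import datetime

    folder = sys.argv[1] if len(sys.argv) > 1 else "."

    for name in os.listdir(folder):
        if not name.lower().endswith((".jpg", ".jpeg")):
            continue
        path = os.path.join(folder, name)
        stat = os.stat(path)
        # Use whichever of the two filesystem dates is earlier.
        earliest = datetime.fromtimestamp(min(stat.st_mtime, stat.st_ctime))
        stamp = earliest.strftime("%Y:%m:%d %H:%M:%S")
        subprocess.run(["exiftool", "-overwrite_original",
                        f"-DateTimeOriginal={stamp}", path], check=True)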
