Question on best practice to extend schema

We have a requirement to extend the directory schema, and I wanted to know what the standard practice is:
1) Is it good practice to manually create an LDIF file so that it can be run on every deployment machine at every stage?
2) Or should the schema be created through the console the first time, and the LDIF file from this machine copied over to the schema directory of the target server?
3) Should the custom schema be appended to the 99user.ldif file, or is it better to keep it in a separate LDIF?
Any info would be helpful.
Thanks
Mamta

I would say it's best to create your own schema file. Call it 60yourname.ldif and place it in the schema directory. This makes it easy to keep track of your schema in a change control system (e.g. CVS). The only problem with this is that schema replication will not work - you have to manually copy the file to every server instance.
If you create the schema through the console, schema replication will occur - schema replication only happens when schema is added over LDAP. The schema is written to the 99user.ldif file. If you choose this method, make sure you save a copy of the schema you create in your change control system so you won't lose it.
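For reference, a custom schema file is just an LDIF fragment against the cn=schema entry, loaded from the schema directory at server startup. Here is a minimal sketch of what a 60yourname.ldif might contain - the attribute and objectclass names are placeholders, and the 1.3.6.1.4.1.99999 OID arc should be replaced with your organization's registered OIDs:

    dn: cn=schema
    attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'myCustomAttr'
      DESC 'Placeholder custom attribute'
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
      SINGLE-VALUE )
    objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'myCustomPerson'
      DESC 'Placeholder auxiliary class that carries myCustomAttr'
      SUP top AUXILIARY
      MAY ( myCustomAttr ) )

Restart the server instance after copying the file (or add the same definitions over LDAP) so the new schema is loaded.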

Similar Messages

  • Best practice on extending the SIEBEL data model

    Can anyone point me to a reference document or provide from their experience a simple best practice on extending the SIEBEL data model for business unique data? Basically I am looking for some simple rules - based on either use case characteristics (need to sort and filter by, need to update frequently, ...) or data characteristics (transient, changes frequently, ...) to tell me if I should extend the tables, leverage the 'x' tables, or do something else.
    Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
    Thanks

    Taking it as a given that Siebel's vanilla data model will always work best, here are some things to keep in mind when you need to add something to support a process that the business is unwilling to adapt:
    1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
    2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries.
    3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
    4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
    5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
    6) During upgrades, take the time to re-evaluate your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
    Good luck!
    Jim

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc.  I get that it's best practice to try and separate traffic where you can, especially for things like FT, but I just wondered if there was a preferred method of achieving this. What I mean is ...
    -     Is it OK to have everything on one switch but set each respective portgroup to have a primary and failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch?  Is there some sort of ranking order of priority/importance?  FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since that seems to me like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, but I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • Architecture/Design Question with best practices?

    Should I have separate web servers and WebLogic instances for the applications and for IAM?
    If yes, then how will the two communicate? For example, should I have a webgate on each server so they can communicate with each other?
    Any reference which helps in deciding how to design this? And if I have separate WebLogic instances, one for the applications and one for IAM, how will session management occur, etc.?
    How does the general design happen in an IAM project?
    Help Appreciated.

    The standard answer: it depends!
    From a technical point of view, it sounds better to use the same "middleware infrastructure", BUT then the challenge is to find the latest WebLogic version that is certified by both the IAM applications and the enterprise applications. This will pull down the WebLogic version, since the IAM application stack is certified against older versions of WebLogic.
    From a security point of view (access, availability): do you have the same security policy for the enterprise applications and the IAM applications (a component of your security architecture)?
    From an organisational point of view: who is the owner of WebLogic, the enterprise applications and the IAM applications? At one of my customers, applications and infrastructure/security were in two different departments; having a common WebLogic domain didn't fit the organization.
    My short answer would be: keep it separated; this will save you a lot of technical and political challenges.
    Didier.

  • What are the best practices to extend the overall lifespan of my MacBook Pro and its battery?

    In general, what are the recommended practices to extend the lifespan of my battery, and what other general practices will keep the characteristics (such as performance and speed) like new, on my MacBook Pro bought this past fall (2011)?

    About Batteries in Modern Apple Laptops
    Apple - Batteries - Notebooks
    Extending the Life of Your Laptop Battery
    Apple - Batteries
    Determining Battery Cycle Count
    Calibrating your computer's battery for best performance
    MacBook and MacBook Pro- Mac reduces processor speed when battery is removed while operating from an A-C adaptor
    Battery University
    Kappy's Personal Suggestions for OS X Maintenance
    For disk repairs use Disk Utility.  For situations DU cannot handle, the best third-party utilities are Disk Warrior and Drive Genius. Disk Warrior only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is now Intel Mac compatible. Drive Genius provides additional tools not found in Disk Warrior; versions 1.5.1 and later are Intel Mac compatible.
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep.  Dependence upon third-party utilities to run the periodic maintenance scripts was significantly reduced since Tiger.  These utilities have limited or no functionality with Snow Leopard or Lion and should not be installed.
    OS X automatically defragments files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems. For more about malware see Macintosh Virus Guide.
    I would also recommend downloading a utility such as TinkerTool System, OnyX 2.4.3, or Cocktail 5.1.1 that you can use for periodic maintenance such as removing old log files and archives, clearing caches, etc.
    For emergency repairs install the freeware utility Applejack.  If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the command line.  Note that AppleJack 1.5 is required for Leopard. AppleJack 1.6 is compatible with Snow Leopard. There is no confirmation that this version also works with Lion.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    Carbon Copy Cloner
    Data Backup
    Deja Vu
    SuperDuper!
    SyncTwoFolders
    Synk Pro
    Synk Standard
    Tri-Backup
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac Maintenance Quick Assist.
    Referenced software can be found at CNet Downloads or MacUpdate.
    Be sure you have an adequate amount of RAM installed for the number of applications you run concurrently. Be sure you leave a minimum of 10% of the hard drive's capacity as free space.

  • New to ColdFusion - Question regarding best practice

    Hello there.
    I have been programming in Java/C#/PHP for the past two years or so, and as of late have really taken a liking to ColdFusion.
    The question that I have is around the actual separation of code, and whether there are any best practices that are preached for this language. While I was learning Java, I was taught that it's best to have several layers in your code, for example: Front end (JSPs or ASP) -> Business Objects -> DAOs -> Database. All of the code that I have written using these three languages has followed this simple structure, for the most part.
    As I dive deeper into ColdFusion, most of the examples that I have seen from veterans of this language don't really incorporate much separation. And I'm not referring to the simple "here's what this function does" type of examples online where most of the code is written in one file; I've been able to see projects that have been created with this language.
    I work with a couple of developers who have been writing in ColdFusion for a few years and posed this question to them as well. Their response was something to the effect of, "I'm not sure if there are any best practices for this, but it doesn't really seem like there's much of an issue making calls like this".
    I have searched online for any type of best practices or discussions around this and haven't seen much of anything.
    I do still consider myself somewhat of a noobling when it comes to programming, but matters of best practice are important to me for any language that I learn more about.
    Thanks for the help.

    Frameworks for web applications can require a lot of overhead, more than you might normally need when programming ColdFusion. I have worked with frameworks, including Fusebox. What I discovered is that when handing a project over to a different developer, it took them over a month before they were able to fully understand the Fusebox framework and program it comfortably. I decided not to use Fusebox on other projects for this reason.
    For maintainability, sometimes it's better not to use a framework; while there are a number of ColdFusion developers, those that know the Fusebox framework are in the minority. When using a framework, you always have to consider the amount of time needed to learn it and successfully implement it. A lot of it depends on how much of your code you want to reuse. One thing you have to consider is: if you need to make a change to the web application, how many files will you have to modify? Sometimes it's more files with a framework than if you just write code without one.
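    Framework or not, you can keep the layering you used in Java by putting your queries in CFCs (ColdFusion Components) and calling them from your pages. A minimal sketch - the datasource, table and component names here are hypothetical:

        <!--- UserDAO.cfc: a hypothetical data-access component --->
        <cfcomponent>
            <cffunction name="getUser" access="public" returntype="query" output="false">
                <cfargument name="userID" type="numeric" required="true">
                <!--- var-scope the query variable so the function is thread-safe --->
                <cfset var q = "">
                <cfquery name="q" datasource="myDSN">
                    SELECT user_id, user_name
                    FROM users
                    WHERE user_id = <cfqueryparam value="#arguments.userID#" cfsqltype="cf_sql_integer">
                </cfquery>
                <cfreturn q>
            </cffunction>
        </cfcomponent>

    A page can then do <cfset q = createObject("component", "UserDAO").getUser(5)> and work only with the returned query, keeping all SQL out of the display templates.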
    While working on a website for Electronic Component sourcing, I encountered this dynamic several times.
    Michael G. Workman
    [email protected]
    http://www.usbid.com
    http://ic.locate-ic.com

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed documentation and posts, I find that there is not that much information available in regards to best practices for the Redwood Scheduler in a SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other Redwood objects from, say, DEV->QAS->PROD. Presentations from the help.sap.com web site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. Point and Shoot (just be careful where you aim!) functionality is described as an advantage of the product. There is a SAP note (#895253) on making Redwood highly available. I am open to comments, inputs and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e. applications, job streams, scripts, events, locks). To date, I have seen in a presentation where customer objects are named starting with Z_. I like to include the object type in the name (e.g. EVT - Event, CHN - Job Chain, SCR - Script, LCK - Lock) keeping in mind the character length limitation of 30 characters. I also have an associated issue with Event naming given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, then we need to include the environment in the event name. The downside here is that we lose transportability for the job stream. We need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least 2: one for development & quality, and a separate one for production; this avoids confusion).
    Regarding transporting/replicating the object definitions: it is really easy to import and export objects like events, job chains, scripts, locks etc. It is also easy and quick to create them afresh in each system; only complicated job chains can be time consuming to create.
    In normal cases, testing of background jobs mostly happens only in the SAP quality instance, with final scheduling in production. So it is very much possible to just export the verified script/job chain from the Cronacle quality instance and import it into the Cronacle production instance (use of the Cronacle shell is recommended for fast processing).
    Regarding OSS note 895253: yes, it is highly recommended to keep your central repository, processing server and licensing information in a highly available clustered environment. This is very much required, as Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM and hence you have only one process server.
    Regarding naming conventions, it is recommended to create a centrally accessible naming convention document and then follow it. For example, in my company we use a job naming convention like Z_AAU_MM_ZCHGSTA2_AU01_LSV, where A is for the APAC region, AU is for Australia (country), MM is for Materials Management, and ZCHGSTA2_AU01_LSV is free text as provided by the batch job requester.
    For other Redwood Cronacle specific objects you can also derive naming conventions based on SAP instances; for example, if you want all related scripts/job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So, in a nutshell, a centrally documented naming convention is highly recommended.
    Also, the integration of SAP Solution Manager with Redwood is there to receive monitoring and alerting data and to pass Redwood Cronacle information to SAP Solution Manager, creating a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal

  • Best Practice for Replicating Schema Changes

    Hi,
    We manage several merge replication topologies (each topology has a single publisher/distributor with several pull subscriptions; all servers/subscribers are SQL Server 2008 R2). When we have a need to perform schema changes in support of pending software upgrades, we do the following:
    a) Have all subscribers synchronize to ensure there are no unsynchronized changes present in the topology at the time of schema update,
    b) Make full copy-only backup of distribution and publication databases,
    c) Execute snapshot agent,
    d) Execute schema change script(s) on publisher (* when steps c and d are reversed, this has caused issues with changes to view definitions, which has resulted in us having to reinitialize subscriptions),
    e) Have subscribers synchronize again to receive schema updates.
    Each topology has its own quirks in terms of subscriber availability and, consequently, the best time to perform such updates.
    The above process would seem necessary when making schema changes to remove tables, columns and/or views from the database, but when schema changes are focused on adding and/or updating objects, and/or adding/updating data, is the entire process above necessary? 
    In this instance, if it's possible to remove the step of coordinating the entire topology to synchronize prior to performing these changes I would like to do that.
    The process as we currently perform it works without issue, but I'd like to streamline it if and where possible, while maintaining integrity and avoiding potential for non-convergence.
    Any assistance or insight you can provide is greatly appreciated.
    Best Regards
    Brad

    If you need to make schema changes then you will need to use ALTER syntax at the publisher.  By default the schema change will be propagated to subscribers automatically, provided the publication property @replicate_ddl is set to true.  This is covered in Make Schema Changes on Publication Databases.
    This can be done at any time, without the need to synchronize unsynchronized changes, make a backup, or execute the snapshot agent.
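    For example, a minimal sketch of the two statements involved; the publication, table and column names below are placeholders:

        -- Confirm/enable DDL replication on the merge publication (true is the default):
        EXEC sp_changemergepublication
            @publication = N'MyMergePublication',
            @property    = N'replicate_ddl',
            @value       = N'true';

        -- Then issue the schema change with ALTER syntax at the publisher:
        ALTER TABLE dbo.Customers ADD Region NVARCHAR(50) NULL;

    The change is delivered to each subscriber the next time it synchronizes.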
    Adding a new article involves adding the article to the publication, creating a new snapshot, and synchronizing the subscription to apply the schema and data for the newly added article. Reinitialization is not required, but a new snapshot is.
    Dropping an article from a publication involves dropping the article, creating a new snapshot, and synchronizing subscriptions. Special considerations must be made for merge publications with parameterized filters and a compatibility level lower than 90RTM.
    This is covered in Add Articles to and Drop Articles from Existing Publications.
    Brandon Williams (blog | linkedin)

  • What is the best practice to export a schema from HTMLDB 2.0 to bundled APEX

    Hello,
    1. I want to transfer data (with triggers, sequences, procedures and functions) from a schema in HTMLDB 2.0 to APEX (with internal www server).
    2. Application export and import works well. I want to move the data from the app's schema to Oracle XE using an export/import methodology, without creating all the objects through SQL scripts (I need to end up with the same sequences and other objects).
    3. Generally, how do I transfer apps and data from HTMLDB 2.0 with EE8MSWIN1250 to APEX 2.0 with DB XE AL32UTF8? (All transfers between DBs with the same NLS_CHARACTERSET succeed without any problems.)
    4. What must I change with regard to user rights and image properties in apps in the new APEX 2.1.0.00.39?
    5. Is there a new version of the bundled APEX (with internal www server)?
    Best regards, Remi

    Aaron:
    Check this site: Bonsai’s HDV to DVD page.
    Hope it helps !
      Alberto

  • Need Best Practice - Apex, multiple schemas, security model

    We have an oracle database which contains
    a) named database users with no objects
    b) several schemas with data tables:
    sales
    marketing
    accounting
    We need to build a GUI for the tables in these schemas;
    every database user should belong to a group, and each user group should have access to several (not all) GUI pages.
    1) Is it possible, and is it recommended (if not, why?), to create ONE workspace and ONE application inside it that has access to ALL tables in ALL the schemas listed, with user-group-level security?
    How do I do it properly?
    Any link to documentation?
    2) Which security model (APEX users, database users, ...) should I choose, and why? Please recommend some links to comparisons...

    Hi Marcus,
    Our developers like to see all the tables for a single custom application in its own diagram, no matter where they come from, and the DBAs don't want to wade through several thousand tables to find the handful we need, nor duplicate table definitions in multiple models. In Designer we have been doing that with Application Folders. There are no application folders in Data Modeler; you can use subviews to define your subject areas. A subview is created for each application (folder) during import from the Designer repository.
    Philip

  • Question regarding best practice

    Hello Experts,
    What is the best way to deploy NWGW?
    We recently architected a solution to install the 7.4 ABAP stack, which comes with Gateway. We chose the central Gateway hub scenario in a 3-tier setup. Is this all that's required in order to connect this hub gateway to the business systems, i.e. ECC? Or do we also have to install the Gateway add-on on our business systems in order to expose the development objects to the hub? I'm very interested in understanding how others are doing this and what has been the best way according to your own experiences. I thought creating a trusted connection between the gateway hub and the business system would suffice to expose the development objects from the business system to the hub, in order to create the gateway services in the hub out of them? Is this a correct assumption? Happy to receive any feedback, suggestions and thoughts.
    Kind regards,
    Kunal.

    Hi Kunal,
    My understanding is that in the HUB scenario you still need to install an addon in to the backend system (IW_BEP). If your backend system is already a 7.40 system then I believe that addon (or equivalent) should already be there.
    I highly recommend you take a look at SAP Gateway deployment options in a nutshell by Andre Fischer
    Hth,
    Simon

  • Question on best practice/optimization

    So I'm working with the Custom 4 dimension, and I'm going to be reusing the highest member in the dimension under several alternate hierarchies. Is it better to drop the top member under each of the alternate hierarchies, or to create a single new member and copy the value from the top member to the new base one?
    Ex:
    TotC4
    --Financial
    ----EliminationA
    ------EliminationA1
    ------EliminationA2
    ----GL
    ------TrialBalance
    ------Adjustments
    --Alternate
    ----AlternateA
    ------Financial
    ------AdjustmentA
    ----AlternateB
    ------Financial
    ------AdjustmentB
    In total there will be about 8 alternate adjustments (it's for alternate translations, if you're curious).
    So should I repeat the entire Financial hierarchy under each alternate rollup, or just write a rule saying FinancialCopy = Financial? It seems like it would be a trade-off between performance and database size, but I'm not sure if this is even substantial enough to worry about.

    You are better off to have alternate hierarchies where you repeat the custom member in question (it would become a shared member). HFM is very fast at aggregating the rollups. This is more efficient than creating entirely new members which would use rules to copy the data from the original member.
    --Chris

  • Question on best practice for NAT/PAT and client access to firewall IP

    Imagine that I have this scenario:
    Client(IP=192.168.1.1/24)--[CiscoL2 switch]--Router--CiscoL2Switch----F5 Firewall IP=10.10.10.1/24 (only one NIC, there is not outbound and inbound NIC configuration on this F5 firewall)
    One of my users is complaining about the following:
    When clients receive traffic from the F5 firewall (apparently the firewall is doing PAT, not NAT), the clients see IP address 10.10.10.1.
    Do you see this as a problem? Should I make another IP address range available and do NAT properly so that clients will not see the firewall IP address? I don't see this situation as a problem, but please let me know if I am wrong.

    Hi,
    Static PAT is the same as static NAT, except it lets you specify the protocol (TCP or UDP) and port for the local and global addresses.
    This feature lets you identify the same global address across many different static statements, so long as the port is different for each statement (you CANNOT use the same global address for multiple static NAT statements).
    For example, if you want to provide a single address for global users to access FTP, HTTP, and SMTP, but these are all actually different servers on the local network, you can specify static PAT statements for each server that use the same global IP address but different ports.
    And for PAT you cannot use the same pair of local and global address in multiple static statements between the same two interfaces.
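    As an illustration, here is roughly what that looks like in pre-8.3 PIX/ASA syntax; the interface names and IP addresses below are examples only:

        ! One global address (203.0.113.10) fronting three internal servers,
        ! distinguished by TCP port (static PAT):
        static (inside,outside) tcp 203.0.113.10 ftp 192.168.1.10 ftp netmask 255.255.255.255
        static (inside,outside) tcp 203.0.113.10 www 192.168.1.20 www netmask 255.255.255.255
        static (inside,outside) tcp 203.0.113.10 smtp 192.168.1.30 smtp netmask 255.255.255.255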
    Regards
    Bjornarsb

  • Question on best practice....

    Friends,
    Final Cut Studio Pro 5/Soundtrack Pro 1.0.3
    Powerbook G4, 2GB Ram
    I have a DV session recorded over 6 hours that I need some assistance with. The audio for the session was recorded in two instances....via a conference "mic" plugged into a Marantz PDM-671 audio recorder onto compactflash (located in the front of the room by the presenter(s)) AND via the built-in mics on our Sony HDR-FX1 video camera. Needless to say, the audio recording on the DV tape is not very good (presenters' voice(s) are distant with lots of "noise" in the foreground), while the Marantz recording is also not great...but better.
    Since these two were not linked together or started recording at the same time, the amount/time of recording doesn't match. I'm looking for either of the following:
    (a) Ways to clean up or enhance the audio recording on the DV tape so that the "background" voices of the presenters are moved to the foreground and able to be amplified properly.
    OR
    (b) A software/resource that would allow me to easily match my separate audio recording from the Marantz to the DV tape video, so I could clean up the "better" of the two audio sources, but match the audio and video without having our speakers look like they're in a badly dubbed film.
    Any advice or assistance you could give would be great. Thanks.
    -Steve
    Steven Dunn
    Director of Information Technology
    Illinois State Bar Association
    Powerbook G4   Mac OS X (10.4.6)   2GB RAM

    Hello Steven,
    What I would do in your case, since you have 6 hours, is to edit the show with the audio off the DV camera. Then, as painful as this will be, get the better audio from the recorder and sync it back up until it "phases" with the audio from the DV camera. One audio track will have the DV camera audio on it. Create another audio track, import the audio from the recorder, and place it on the 2nd audio track. Find the exact "bite" of audio and match it to the start of the DV camera audio clip. Now slip/slide the recorder audio until the sound starts to "phase". This will take a while, but in the end it works when the original camera audio was recorded from across the room. Good luck.

  • A question about Best Practices

    I'm currently working on a project and have run into a bit of a structure debate.
    Our project works with a relational database.
    Hence we have classes that model certain sections of the DB.
    We wish to create a Data Access Object to interface the model classes to the DB. To enforce consistency in programming, we were thinking of using a DAOInterface that would define all methods (i.e. load(), save(), etc.).
    This leads to one issue: because each model is different, our interface would need to declare arguments and returns as Object.
    Which means a lot of casting... ugh... ugly.
    However, the solution to this problem is to create an interface for each DAO object; but this defeats the purpose, because now any developer on the team can sneak a method in without it being standard across the board...
    I was hoping my fellow developers might be able to share their experiences with this problem and provide recommendations.
    thanks
    J.

    You can declare "marker" interfaces for your DO Classes to be included in the interface for the DAO Class.
    public interface DAOInterface {
        DOInterface create(DOPrimaryKeyInterface key) throws DAOException;
    }
    public interface DOInterface {
    }
    public interface DOPrimaryKeyInterface {
    }
    It still involves casting, but at least not from Object - and it does enforce the "contract."
    As to keeping other developers from screwing it up, that's called Team Management and is out of the purview of this forum. ;D
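    One more thought: on a Java version with generics (5.0+), a parameterized DAO contract removes the casting entirely. A minimal sketch; the type names here are hypothetical:

        // K is the primary-key type, T is the domain-object type.
        public interface GenericDAO<K, T> {
            T create(K key) throws DAOException;
            T load(K key) throws DAOException;
            void save(T obj) throws DAOException;
        }

        // Each concrete DAO pins the types, so call sites need no casts, e.g.:
        // public class CustomerDAO implements GenericDAO<CustomerKey, Customer> { ... }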
