Best Practice for Package Implementation of Data Manipulation

Hi,
Would like to ask which is the better implementation of data manipulation (insert, update, delete) stored procedures for a single table:
to create a single procedure with an input parameter for the action, such as 1 for insert, 2 for update and so on,
or
to create separate procedures for each action, like procedure pInsData for insert, pUpdData for update...

Hi,
Whenever you create a procedure it resides as a separate object in the database.
In my opinion it's better to create a single procedure which takes care of all DML concerning a table, rather than creating different procedures for each DML operation.
If your DML operations are numerous and interrelated, then it's better to create a package and put all the related DML procedures concerning one transaction or table in it. Whenever you call a package, the entire package is placed in memory for that session, so subsequent calls to its procedures are served from there; if you create separate standalone procedures instead, each one has to be loaded individually every time you want it executed.
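To illustrate, here is a minimal sketch of such a package; the table emp_data, its columns and the procedure names are made-up placeholders, not a definitive implementation:
CREATE OR REPLACE PACKAGE pkg_emp_data AS
  PROCEDURE pInsData (p_id IN NUMBER, p_name IN VARCHAR2);
  PROCEDURE pUpdData (p_id IN NUMBER, p_name IN VARCHAR2);
  PROCEDURE pDelData (p_id IN NUMBER);
END pkg_emp_data;
/
CREATE OR REPLACE PACKAGE BODY pkg_emp_data AS
  -- Insert one row into the placeholder table emp_data
  PROCEDURE pInsData (p_id IN NUMBER, p_name IN VARCHAR2) IS
  BEGIN
    INSERT INTO emp_data (id, name) VALUES (p_id, p_name);
  END pInsData;
  -- Update the row matching the given id
  PROCEDURE pUpdData (p_id IN NUMBER, p_name IN VARCHAR2) IS
  BEGIN
    UPDATE emp_data SET name = p_name WHERE id = p_id;
  END pUpdData;
  -- Delete the row matching the given id
  PROCEDURE pDelData (p_id IN NUMBER) IS
  BEGIN
    DELETE FROM emp_data WHERE id = p_id;
  END pDelData;
END pkg_emp_data;
/
A caller would then invoke, for example, pkg_emp_data.pInsData(1, 'Smith'); the first such call loads the whole package into the session's memory, and later calls to any of its procedures are served from there.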
Twinkle

Similar Messages

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the category dimension, so you can use this dimension to separate the actual and plan data.
    Hope this helps.

  • What are best practice for packaging and deploying j2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the first time but may work the second time). Also, we've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS's. Are you guys unregistering old application then reregistering new application? Are you shutting down iAS whilst doing the deployment?
    3) Deploying ear files can take 5 to 10 minutes; is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    thanks in advance for your replies
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the run-time layout of your application that make it clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage accumulating in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course, you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: BE CAREFUL - BACKUP FIRST
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are not represented in the set of active GUIDs. Record these older GUIDs and remove them from ClassImp and ClassDef. Finally remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize Ant as a build tool. In later versions of the application server, complete Ant scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS-specific targets and general J2EE targets. There are iAS-specific targets that can be utilized with the 1.3 version. Specialized build targets are not required, however, to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSPs, however, benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, recreating an existing application with new GUIDs, or removing registered components, you may skip the undeploy phase.
    If you shut down the KJS processes during deployment you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to redeploy. Simply drop your newer bits into the run-time space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • In the Beginning it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third-party ODBC driver which allows us to access the data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object-oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third-party ODBC driver which allows us to access the data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object-oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
    -Matthew


  • Best Practices for converting SAP HR data (4.7 to ECC)

    Hello Experts ...
    We are going from 4.6 to ECC ... no upgrade; it will be a new implementation ...
    I am looking for best practices to convert SAP HR data from one SAP instance (4.6) to another (ECC) ...
    I am not sure if direct input or LSMW or any other method/tool is the best way ...
    Will really appreciate and award points if I can get good advice or documentation ...
    Let me know if my question is not clear enough.
    Thanks,

    Hi
    You can check the SAP Marketplace and follow the download link there. You will require an SAP Marketplace login to download the Best Practices. Fortunately, Best Practices for ECC 5.00 and 6.00 are available there, but they are country-specific versions. I know HCM for the US is available there.
    Reward points, if helpful.
    Regards
    Waz

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described in
    Re: OSB: What is best practice for reading configuration information
    Another could be
    uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers to the following:
    1] I have an .xsd file which represents the configuration data. The structure of the XSD is
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment...
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to the Dev/Stage/Prod instance:
    OSB Project Folder
    |
    |---Dev
    |    |--Dev_Config_file.xml
    |
    |---Stage
    |    |--Stage_Config_file.xml
    |
    |---Prod
    |    |--Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value which specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running. Let's say some construct which acts as a global configuration and can be accessed inside the OSB message flow. If I get the value Dev for the global variable, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread
    Re: OSB: What is best practice for reading configuration information
    suggests designing a web application which serves the XML file over HTTP, getting the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear... I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and applying appropriate XPath functions you will be able to extract the environment name.
    Chk this link for details on $inbound/ctx:transport : http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822

  • Best practice for phone number column data type

    Hi,
    I hope I have posted this in the right forum; if not, just notify me and I will move it to the correct area.
    What would be considered best practice for storing a phone number?
    Number or Varchar2?
    Ben

    Well, I was thinking that Number would have the following disadvantages:
    1. Users entering phone numbers into a form may be tempted to pre-format with spaces, thereby throwing up an error.
    2. Mobile phone numbers may have leading zeros
    3. Calculations are not carried out on phone numbers.
    I was leaning towards a varchar2 type.
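    For example, here is a minimal sketch along those lines (table, column and constraint names are made-up placeholders): the number is stored as VARCHAR2, and a check constraint still rejects clearly invalid input.
    -- Phone stored as text so leading zeros and formatting survive;
    -- the constraint allows only digits, spaces and an optional leading +.
    CREATE TABLE contacts (
      contact_id NUMBER PRIMARY KEY,
      phone      VARCHAR2(20)
                 CONSTRAINT contacts_phone_chk
                 CHECK (REGEXP_LIKE(phone, '^\+?[0-9 ]+$'))
    );
    This keeps leading zeros and entered spaces intact while still blocking letters, which addresses points 1 and 2 above.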
    Ben

  • What is the best practice for package source locations?

    I have several remote servers (about 16) that are being utilized as file servers; they have many binaries on them to be used by users and remote site admins for content. Can I have SCCM just use these pre-existing locations as package sources, or is this not considered best practice?
    Or
    Should I create just one package source within close proximity to the Site Server, or on the Site Server itself?
    Thanks

    The primary site server is responsible for grabbing the source data and turning it into packages for distribution points. So while you can use ANY UNC path as a source location for content, you should be aware of where that content exists in regard to your primary site server. If your source content is in Montana but your primary server is in California ... there's going to be a WAN hit ... even if the DP it's destined for is also in Montana.
    Second, I strongly recommend locking down your source UNC path so that only the servers and SCCM admins can access it. This will prevent side-loading of content as well as any "accidental changing" of the folder structure that could cause your applications/packages to go crazy.
    Put the two together, and I typically recommend you create a DSL (distributed source library) share and slowly migrate all your content into it as you create your packages/applications. You can then safely create batch installers, manage content versions, and other things without fear of someone running something out of context.

  • Best practice for having separate clone data for development purposes?

    Hi
    I am on a hosted Apex environment
    I have a workspace containing two instances/ copies of the application: DEV and PROD
    I would like to be able to develop functionality and data in/with the DEV instance and then promote it to PROD.
    I gather that I can insert pages from DEV to PROD via Create -> New page as copy -> Page in another application
    But I don't know how I can mimic this process with database objects, eg. if I want to create a new table or manipulate the data in an existing table in a DEV environment before implementing in a PROD environment.
    Ideally this would be done in such a way that minimises changing table names etc when elevating pages from DEV to PROD.
    Would it be possible to create a clone schema that could contain the same tables (with the same names) as PROD?
    Any tips, best practices appreciated :)
    Thanks

    Hi,
    ideally you should have a little more separation between your dev and prod environments. At the minimum you should have separate workspaces each addressing separate schemas. Apex can be a little difficult if you want to move individual Apex application objects, such as pages, between applications (a much requested improvement), but this can be overcome by exporting and importing the whole application. You should also have some form of version control/backup of export files.
    As far as database objects go, tables etc, if you have tns access to your hosted environment, then you can use SQL Developer to develop, maintain and synchronize between your development and production schemas and objects in the different environments should have identical names. If you don't have that access, then you can use the Apex SQL Workshop features, but these are a little more cumbersome than a tool like SQL Developer. Once again, scripts for creating and upgrading your database schemas should be kept under some sort of version control.
    All of this is supposing your hosting solution allows more than one workspace and schema, if not you may have to incur the cost of a second environment. One other option would be to do your development locally in an instance of Oracle XE, ensuring you don't have any version conflicts between the different database object features and the Apex version.
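    As a small illustration of the scripted approach (hypothetical table; kept deliberately simple), the same DDL script lives in version control and is run unchanged against both the DEV and PROD schemas, so object names stay identical:
    -- 001_create_customers.sql (hypothetical versioned script)
    CREATE TABLE customers (
      id   NUMBER PRIMARY KEY,
      name VARCHAR2(100) NOT NULL
    );
    CREATE SEQUENCE customers_seq;
    Promoting a change then means running the next numbered script in each environment, rather than editing objects by hand.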
    I hope this helps.
    Regards
    Andre

  • Best Practice for Apex Implementation

    Hello,
    I'm looking for some guidance in best practices on implementing Apex across our enterprise. Do we install it on many databases based on whether an application gets most of its data from that database? And if so, could we use one 10gAS web server to serve up all of the instances?
    We currently have Apex installed on RAC databases in each environment (Dev, Int, QA, Prod), and then use dblinks to connect to the many remaining databases. Each RAC environment then uses an appropriate 10gAS web server (one web server per Apex installation). I'm wondering if this is a good approach or not? Any suggestions are appreciated.
    Julie

    Hi,
    Some of the standards will depend on your end-user capabilities/environments but I have a good standards document we use when developing UPK content for the eBus suite which might help.
    Email me at [email protected] and I'll send it over.
    Jon

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast.  The problem is with Xcelsius.  I have imported the spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing).  I changed Max Rows to 7000 to accommodate the data.  I placed a simple combo box and a grid on the Canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (Months as Column headings) and insert that crosstabbed data into Excel.
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best Practices for Packaging and Deploying Server-Specific Configurations

    We have some server-specific properties that vary for each server. We'd like to have these properties collected together in their own properties file (either .properties or .xml is fine).
    What is the best-practices way to package and deploy an application (as an ear file), where each server needs some specific properties?
    We'd kind of like to have the server-specific properties file be stored external to the ear on the server itself, so that the production folks can configure each server's properties at the server. But it appears that an application can't access a file external to the ear, or at least we can't figure out the magic to do it. If there is a way to do this, please let me know how.
    Or do we have to build a unique ear for each server? This is possible, of course, but we'd prefer to build one deployment package (ear), and then ship that off to each server that is already configured for its specific environment. We have some audit requirements where we need to ensure that an ear that has been tested by QA is the very same ear that has been deployed, but if we have to build one for each server, this is not possible.
    Any help or pointers would be most appreciated. If this is an old issue, my apologies; would you please point me to any previous material to read? I didn't see anything after searching through this group's archives.
    Thanks much in advance,
    Paul
    Paul Hodgetts -- Principal Consultant
    Agile Logic -- www.agilelogic.com
    Consulting, Coaching, Training -- On-Site & Out-Sourced Development
    Java, J2EE, C++, OOA/D -- Agile Methods/XP/Scrum, Use Cases, UI/IA

    The one drawback to this is that you have to go all the way back to Ant and the build system to make changes. You really want these env variables to be late-binding.
    cheers
    mbg
    "Sai S Prasad" <[email protected]> wrote in message
    news:[email protected]...
    Paul,
    I have a similar situation in our project and I don't create ear files specific to the environment. I do the following:
    1) Create a .properties file for every environment with the same attribute names but different values in it. For example, I have phoenix.properties.NT, phoenix.properties.DEV, phoenix.properties.QA, phoenix.properties.PROD.
    2) Use Ant to compile, package and deploy the ear file.
    I have a .bat file on NT and a .sh for Solaris that in turn calls ant.bat or ant.sh respectively. For the wrapper batch file or shell script, you can pass the name of the environment. The wrapper batch file will copy the appropriate properties file to "phoenix.properties". In the Ant build.xml, I always refer to phoenix.properties, which is available all the time depending on the environment.
    It works great and I can't think of any other flexible way. Hope that helps.

  • Best practices for administering Oracle Big Data Appliance

    - Best practices as part of administration of the Oracle Big Data infrastructure
    - How do we lock down max space usage per project?
      E.g., project team A can have a max limit of 10 TB of space allocated.
    - Restricting roles and access (read, write), and a placeholder for common shared artifacts
    - Template/procedure for code migration across dev, QA and prod environments, etc.

    Your data is bigger than what I run, but what I have done in the past is to restrict such accounts to a separate datafile and limit its size to the max that I want them to use: their objects are created only in that restricted location, so the space they consume is capped.
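    A minimal sketch of that idea for the 10 TB example (tablespace, file path and user names are placeholder assumptions): give the team its own size-capped bigfile tablespace and a quota on it.
    -- Dedicated tablespace whose single datafile cannot grow beyond 10 TB
    CREATE BIGFILE TABLESPACE project_a_ts
      DATAFILE '/u01/oradata/project_a.dbf'
      SIZE 100G AUTOEXTEND ON MAXSIZE 10T;
    -- Point the project account at it; growth is then capped by the tablespace
    ALTER USER project_a_user
      DEFAULT TABLESPACE project_a_ts
      QUOTA UNLIMITED ON project_a_ts;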

  • Best practices for Indirection implementation?

    Hi, I'm about to start trying out several aspects of indirection in Toplink. My question is; what are best practices to implement this feature?
    To me, it looks like proxy indirection is the cleanest way to do this, I'm not sure however whether there are any restrictions to it - besides the necessity for an interface for each domain class. My goal is to keep the amount of Toplink code 'clutter' in my domain model as low as possible.
    Thanks.

    Although proxy indirection is nice for 1:1 mappings, as it reduces the 'clutter', there may be a performance hit. It really depends upon your JDK.
    Personally I prefer to use ValueHolder for 1:1 and transparent collection indirection for my collection mappings. The attribute that you make a ValueHolder is private, and the API of your class does not need to expose its existence in any way. I find this best as I do not need to manage an additional interface and keep its API in sync.
    In the very near future you will be able to use our EJB 3.0 implementation that leverages AOP-style weaving to dynamically enhance your mapped classes during loading. This will allow you to have 1:1 indirection without the interface or ValueHolder. Since changing the type of the attribute and the implementation of the get/set methods is probably less intrusive than factoring out an unnecessary interface, for your future migration I would stick with the ValueHolder for now.
    Doug
