Best practices - ANSI-92 syntax

We're going to let developers move away from the Oracle proprietary join syntax and start using ANSI-92 compliant syntax. Are there any best practices out there that I can use to standardize our code, or to make it run more efficiently?
I've seen generated ANSI-92 SQL with tons of parentheses in the past. Does it add value to collect all of your INNER JOINs together within parens, and then OUTER JOIN outside of those parens?
The only standards I could think of, mostly for readability (sketched below):
1) no RIGHT OUTER JOINs (only LEFT)
2) move all filter criteria for tables involved in INNER JOINs to the WHERE clause
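For example, a minimal sketch of both standards against the usual EMP/DEPT demo tables (BONUS as the optional table is my own assumption): join criteria stay in the ON clauses, single-table filter criteria go in the WHERE clause, and the optional table comes in through a LEFT OUTER JOIN:
SELECT e.ename
     , d.dname
     , b.comm
  FROM emp e
 INNER JOIN dept d
    ON e.deptno = d.deptno
  LEFT OUTER JOIN bonus b
    ON b.ename = e.ename
 WHERE e.sal > 1000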
--=Chuck

One more thought...
I have worked on standards several times. What makes them successful is if the people who must code to them also feel like they own them. I would suggest that you make a draft or straw-man set of standards, then enlist key (or all) developers to review them page by page and make sure their suggestions are understood and included -- make sure you include reasons for the standards. Sometimes it helps to divide the standards into two types, rules and best practices: rules being firm and best practices being flexible. It even helps if one of the review team becomes the editor and you (I presume manager or DBA) only act as adviser and facilitator.

Similar Messages

  • Join syntax best practice

    What is the best practice for doing joins? Using the keywords INNER JOIN etc., or using = and (+) to determine right or left outer joins?
    SELECT a.ename
         , b.deptno
      FROM emp a INNER JOIN dept b
        ON a.deptno = b.deptno
    SELECT a.ename
         , b.deptno
      FROM emp a
         , dept b
     WHERE a.deptno = b.deptno
    Thank you,
    Ben

    I used to hate ANSI syntax, but I came to like it and now find it more readable.
    Another thing...
    Oracle's (+) syntax does not support outer joining a table to more than one other table.
    However, ANSI syntax does...
    SQL> select * from a;
            ID      B_KEY      C_KEY
             1          2          3
             2          1          4
             3          3          1
             4          4          2
    SQL> select * from b;
            ID     C_KEY2
             1          1
             2          5
             3          3
             4          2
    SQL> select * from c;
          KEY1       KEY2 DTA
             1          1 1-1
             1          2 1-2
             1          3 1-3
             1          4 1-4
             2          1 2-1
             2          2 2-2
             2          3 2-3
             2          4 2-4
             3          1 3-1
             3          2 3-2
             3          3 3-3
             3          4 3-4
             4          1 4-1
             4          2 4-2
             4          3 4-3
             4          4 4-4
    16 rows selected.
    SQL> ed
    Wrote file afiedt.buf
      1  select a.id as a_id, b.id as b_id, c.key1 as c_key1, c.key2 as c_key3, c.dta
      2  from a, b, c
      3  where a.b_key = b.id
      4  and   a.c_key = c.key1 (+)
      5* and   b.c_key2 = c.key2 (+)
    SQL> /
    and   a.c_key = c.key1 (+)
    ERROR at line 4:
    ORA-01417: a table may be outer joined to at most one other table
    SQL> ed
    Wrote file afiedt.buf
      1  select a.id as a_id, b.id as b_id, c.key1 as c_key1, c.key2 as c_key3, c.dta
      2  from a JOIN b ON (a.b_key = b.id)
      3*        LEFT OUTER JOIN c ON (a.c_key = c.key1 and b.c_key2 = c.key2)
    SQL> /
          A_ID       B_ID     C_KEY1     C_KEY3 DTA
             3          3          1          3 1-3
             4          4          2          2 2-2
             2          1          4          1 4-1
             1          2
    SQL>

  • Unicode Migration using National Characterset data types - Best Practice ?

    I know that Oracle discourages the use of the national character set and the national character set data types (NCHAR, NVARCHAR), but that is the route my company has decided to take, and I would like to know what the best practice is regarding this, specifically in relation to stored procedures.
    The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and if there are any hard and fast rules that need to be followed.
    Specific questions that I have are :
    1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types ?
    2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types ?
    3. Do string literals need to be prefixed with 'N' in all cases ? e.g.
    in variable assignments - v_module_name := N'ABCD'
    in variable comparisons - IF v_sp_access_mode = N'DL'
    in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
    in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
    If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
    Database details are as follows and the application is written in COBOL and this is also being changed to be Unicode compliant:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    NLS_CHARACTERSET = WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET = AL16UTF16

    ##1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2; the same goes for VARCHAR variables.
    VARCHAR columns/parameters/variables should not be used, as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
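    As a minimal sketch (hypothetical procedure, table and column names) of what that looks like after the conversion, with the parameter, the variable and the literal all using national character set types:
    CREATE OR REPLACE PROCEDURE set_module_access (p_module_name IN NVARCHAR2)
    IS
      v_access_mode NVARCHAR2(2) := N'DL';   -- NVARCHAR2 variable, N'...' literal
    BEGIN
      UPDATE modules                         -- hypothetical table with NVARCHAR2 columns
         SET access_mode = v_access_mode
       WHERE module_name = p_module_name;
    END;
    /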
    ##3. Not sure I understand; are you saying that Unicode columns (NVARCHAR2, NCHAR) in the database will only be able to store character strings made up from WE8MSWIN1252 characters?
    No, I meant literals. You cannot include non-WE8MSWIN1252 characters in a literal. Actually, you can include them under certain conditions, but they will be transformed to an escaped form. See also the UNISTR function.
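    A short sketch of both points, assuming a hypothetical table with an NVARCHAR2 column:
    -- The N prefix makes this a national character set literal:
    SELECT * FROM customers WHERE last_name_n = N'ABCD';
    -- UNISTR builds an NVARCHAR2 value from \XXXX code point escapes,
    -- which is how characters outside WE8MSWIN1252 can be written portably:
    SELECT UNISTR('Se\00F1or') FROM dual;   -- \00F1 is the code point for ñ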
    ## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
    ## to keep the code/schemas consistent between the two databases
    First, you have to keep two sets of scripts anyway, because the DDL syntax differs between SQL Server and Oracle. There is therefore little benefit in just keeping the data type names the same while so many other things need to be different. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types, or at least I would use some placeholder syntax and have the application installer replace the placeholders with the appropriate data types per target system.
    ## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
    I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
    -- Sergiusz

  • Best practices for setting up projects

    We recently adopted using Captivate for our WBT modules.
    As a former Flash and Director user, I can say it’s
    fast and does some great things. Doesn’t play so nice with
    others on different occasions, but I’m learning. This forum
    has been a great source for search and read on specific topics.
    I’m trying to understand best practices for using this
    product. We’ve had some problems with file size and
    incorporating audio and video into our projects. Fortunately, the
    forum has helped a lot with that. What I haven’t found a lot
    of information on is good or better ways to set up individual
    files, use multiple files and publish projects. We’ve decided
    to go the route of putting standalones on our Intranet. My gut says
    yuck, but for our situation I have yet to find a better way.
    My question for discussion, then is: what are some best
    practices for setting up individual files, using multiple files and
    publishing projects? Any references or input on this would be
    appreciated.

    Hi,
    Here are some of my suggestions:
    1) Set up a style guide for all your standard slides. Eg.
    Title slide, Index slide, chapter slide, end slide, screen capture,
    non-screen capture, quizzes etc. This makes life a lot easier.
    2) Create your own buttons and captions. The standard ones
    are pretty ordinary, and it's hard to get a slick looking style
    happening with the standard captions. They are pretty easy to
    create (search for add print button to learn how to create
    buttons). There should be instructions on how to customise captions
    somewhere on this forum. Customising means that you can also use
    words, symbols, colours unique to your organisation.
    3) Google elearning providers. Most use captivate and will
    allow you to open samples or temporarily view selected modules.
    This will give you great insight on what not to do and some good
    ideas on what works well.
    4) Timings: Using the above research, I got others to
    complete the sample modules to get a feel for timings. The results
    were clear, 10 mins good, 15 mins okay, 20 mins kind of okay, 30
    mins bad, bad, bad. It's truly better to have a learner complete
    2-3 short modules in 30 mins than one big monster. The other
    benefit is that shorter files equal smaller size.
    5) Narration: It's best to narrate each slide individually
    (particularly for screen capture slides). You are more likely to
    get it right on the first take, it's easier to edit and you don't
    have to re-record the whole thing if you need to update it in
    future. To get a slicker effect, use at least two voices: one male,
    one female and use slightly different accents.
    6) Screen capture slides: If you are recording filling out
    long window-based database pages where the compulsory fields are
    marked (eg. with a red asterisk) - you don't need to show how to
    fill out every field. It's much easier for the learner (and you) to
    show how to fill out the first few fields, then fade the screen
    capture out, fade the end of the form in with the instructions on
    what to do next. This will reduce your file size. In one of my
    forms, this meant the removal of about 18 slides!
    7) Auto captions: they are verbose (eg. 'Click on Print
    Button' instead of 'Click Print'; 'Select the Print Preview item'
    instead of 'Select Print Preview'). You have to edit them.
    8) PC training syntax: Buttons and hyperlinks should normally
    be 'click'; selections from drop down boxes or file lists are
    normally 'select': Captivate sometimes mixes them up. Instructions
    should always be written in the correct order: eg. Good: Click
    'File', Select 'Print Preview'; Bad: Select 'Print Preview' from
    the 'File Menu'. Button names, hyperlinks, selections are normally
    written in bold
    9) Instruction syntax: should always be written in an active
    voice: eg. 'Click Options to open the printer menu' instead of
    'When the Options button is clicked on, the printer menu will open'
    10) Break all modules into chapters. Frame each chapter with
    a chapter slide. It's also a good idea to show the Index page
    before each chapter slide with a progress indicator (I use an
    animated arrow to flash next to the name of the next chapter), I
    use a start button rather than a 'next' button for the start of each
    chapter. You should always have a module overview with the purpose
    of the course and a summary slide which states what was covered and
    that they have completed the module.
    11) Put a transparent click button somewhere on each slide.
    Set the properties of the click box to take the learner back to the
    start of the current chapter by pressing F2. This allows them to
    jump back to the start of their chapter at any time. You can also
    do a similar thing on the index pages which jumps them to another
    chapter.
    12) Recording video capture: best to do it at normal speed
    and be conscious of where your mouse is. Minimise your clicks. Most
    people (until they start working with Captivate) are sloppy with
    their mouse and you end up with lots of unnecessary slides that
    you have to delete out. The speed will default to how you recorded
    it and this will reduce the amount of time you spend on changing
    timings.
    13) Captions: My rule of thumb is minimum of 4 seconds - and
    longer depending on the amount of words. Eg. Click 'Print Preview'
    is 4 seconds, a paragraph is longer. If you are creating knowledge-
    based modules, make the timing long (eg. 2-3 minutes) and put in a
    next button so that the learner can click when they are ready.
    Also, narration means the slides will normally be slightly longer.
    14) Be creative: Captivate is desk-bound. There are some
    learners that just don't respond no matter how interactive
    Captivate can be. Incorporate non-captivate and desk free
    activities. Eg. As part of our OHS module, there is an activity
    where the learner has to print off the floor plan, and then wander
    around the floor marking on the map key items such as: fire exits;
    first aid kit, broom and mop cupboard, stationery cupboard, etc.
    Good luck!

  • NLS data conversion – best practice

    Hello,
    I have several tables that originate from a database with a single-byte character set. I want to load the data into a database with a multi-byte character set like UTF-8 and, in the future, be able to use the Unicode version of Oracle XE.
    When I'm using DDL scripts to create the tables on the new database, and after that trying to load the data, I receive a lot of error messages regarding the size of the VARCHAR2 fields (which, of course, makes sense).
    As I understand it, I can solve the problem by doubling the size of the VARCHAR2 fields: VARCHAR2(20) will become VARCHAR2(40) and so on. Another option is to use the NVARCHAR2 datatype and retain the correlation with the number of characters in the field.
    I never used NVARCHAR2 before, so I don't know if there are any side effects on the pre-built APEX processes like Automatic DML, Automatic Row Fetch and the like, or on the APEX data import mechanism.
    What will be the best practice solution for APEX?
    I'll appreciate any comments on the subjects,
    Arie.

    Hello,
    Thanks Maxim and Patrick for your replies.
    I started to answer Maxim when Patrick's post came in. It's interesting, as I tried to change this nls_length_semantics parameter once before, but without any success. I even wrote an APEX procedure to run over all my VARCHAR2 columns and change them to something like VARCHAR2(20 CHAR). However, I wasn't satisfied with this solution, partly because of what Patrick said about developers forgetting the full syntax, and partly because I read that some of the internal procedures (mainly with LOBs) do not support character semantics and always work in byte mode.
    Changing the nls_length_semantics parameter seems like a very good solution, mainly because, as Patrick wrote, "The big advantage is that you don't have to change any scripts or PL/SQL code."
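    A minimal sketch of that approach (demo_names is a hypothetical table):
    -- With character length semantics, VARCHAR2(20) means 20 characters
    -- rather than 20 bytes, so existing DDL survives a multi-byte character set:
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
    CREATE TABLE demo_names (
      name VARCHAR2(20)   -- now equivalent to VARCHAR2(20 CHAR)
    );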
    I'm just curious, what is the technique APEX uses to run on all the various single-byte and multi-byte character sets?
    Thanks,
    Arie.

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table contains a two key columns unique key constraint.  I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained
    (not unique) in the daily capture table).  I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns).  Currently, what I do is to select * into a #temp table from the join
    of daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table. 
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just inserting directly from the daily capture table? How would that look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
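    For what it's worth, a minimal sketch of the MERGE approach with hypothetical names (key1/key2 standing in for the two key columns, payload for the rest), in SQL Server syntax since the thread mentions #temp tables:
    MERGE final_data AS f
    USING (SELECT DISTINCT key1, key2, payload
             FROM daily_capture) AS d
       ON f.key1 = d.key1
      AND f.key2 = d.key2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (key1, key2, payload)
        VALUES (d.key1, d.key2, d.payload);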
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • Best Practices for Connecting to WebHelp via an application?

    Greetings,
    My first post on these forums, so I apologize if this has already been covered (I've done some limited searching without success).  I'm developing a .NET application which is accessing my organization's RoboHelp-generated WebHelp.  My organization's RoboHelp documentation team is still new with the software, so it's been up to me to chart the course for establishing the workflow for connecting to the help from the application.  I've read up on Peter Grange's 'calling webhelp' section off his blog, but I'm still a bit unclear about what might be the best-practices approach for connecting to WebHelp.
    To date, my org. has been delayed in letting me know their TopicIDs or MapIDs for their various documented topics.  However, I have been able to acquire the relative paths to those topics (I achieved this by manually browsing their online help and extracting out the paths).  And I've been able to use the strategy of creating the link via constructing a URL (following the strategy of using the following syntax: "<root URL>?#<relative URI path>" alternating with "<root URL>??#<relative URI path>").  It strikes me, however, that this approach is somewhat of a hack - since RoboHelp provides other approaches to linking to their documentation via TopicID and MapID.
    What is the recommended/best-practices approach here?  Are they all equally valid, or are there pitfalls I'm missing?  I'm inclined to use the URI methodology that I've established above, since it works for my needs so far, but I'm worried that I'm not seeing the forest for the trees...
    Regards,
    Brett
    contractor to the USGS
    Lakewood, CO
    PS: we're using RoboHelp 9.0

    I've been giving this some thought over the weekend and this is the best answer I've come up with from a developer's perspective:
    (1) Connecting via URL is convenient if (#1) you have an established naming convention that works for everyone (as Peter mentioned in his reply above)
    (2) Connecting via URL has the disadvantage that changes to the file names and/or folder structure by the author will break connectivity
    (3) Connecting via TopicID/MapID has the advantage that if there is no naming convention or if it's fluid or under construction, the author can maintain that ID after making changes to his/her file or folder structure and still maintain the application connectivity.  Another approach to solving this problem if you're working with URLs would be to set up a web service that would match file addresses to some identifier utilized by the developer (basically a TopicID/MapID coming from the other direction).
    (4) Connecting via TopicID has an aesthetic appeal in the code since it's easy to provide a more english-readable identifier.  As a .Net developer, I find it easy and convenient to construct an enum that matches my TopicIDs and to utilize that enum to construct my identifier when it comes time to make the documentation call.
    (5) Connecting via URL is more convenient for the author, since he/she doesn't have to worry about maintaining IDs
    (6) Connecting via TopicIDs/MapIDs forces the author to maintain those IDs and allows the documentation to be more easily used in the future by other applications built by developers who might have their own preference, in one direction or another, as to how they make their connection.
    Hope that helps for posterity.  I'd be interested if anyone else had thoughts to add.
    -Brett

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (Target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (Target), what is the best practice? The workaround I have found was to add the same session twice and, for the 1st instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the Source tables. This will impact run times and double the querying against the Source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance you have with the two products working together I would GREATLY appreciate it!
    Robert

    As I know, addUser(User user) { ... } is much more useful for several reasons:
    1. It's object oriented.
    2. It's easy to write, because if an object has many parameters it's very painful to write a method with comma-separated parameters.

  • Best practices about JTables.

    Hi,
    I've been programming in Java for five months. Now I'm developing an application that uses tables to present information from a database. This is my first time handling tables in Java. I've read Sun's Swing tutorial about JTable, and several pieces of information on other websites, but they limit themselves to the table's syntax and not to best practices.
    So I decided on what I think is a proper way to handle data from a table, but I'm not sure it is the best way. Let me tell you the general steps I'm going through:
    1) I query employee data from Java DB (using EclipseLink JPA), and load it in an ArrayList.
    2) I use this list to create the JTable, first transforming it to an Object[][] and feeding this into a custom TableModel.
    3) From now on, if I need to search an object on the table, I search it on the list and then with the resulting index, I get it from the table. This is possible because I keep the same row order on the table and on the list.
    4) If I need to insert an item on the table, I do it also on the list, and so forth if I'd need to remove or modify an element.
    Is the technique I'm using a best practice? I'm not sure that having to keep the table synchronized with the list is the best way to handle this, but I don't know how I'd deal with just the table, for instance to efficiently search an item or to sort the table, without doing that first on a list.
    Are there any best practices in dealing with tables?
    Thank you!
    Francisco.

    Hi Joachim,
    What I'm doing now is extending DefaultTableModel instead of extending AbstractTableModel. This saves implementing methods I don't need, and I inherit methods like addRow from DefaultTableModel. Let me paste the private class:
    protected class MyTableModel extends DefaultTableModel {
        private Object[][] datos;

        public MyTableModel(Object[][] datos, Object[] nombreColumnas) {
            super(datos, nombreColumnas);
            this.datos = datos;
        }

        @Override
        public boolean isCellEditable(int fila, int columna) {
            return false;
        }

        @Override
        public Class getColumnClass(int col) {
            return getValueAt(0, col).getClass();
        }
    }
    What you are suggesting, if I understood correctly, is to register MyTableModel as a ListSelectionListener, so changes on the list will be observed by the table? In that case, if I add, change or remove an element from the list, I could add, change or remove that element from the table.
    Another question: is it possible to only use the list to create the table, but then managing everything just with the table, without using a list?
    Thanks.
    Francisco.

  • Best practice using keyword in subject for encryption

    I have set up an encryption content filter on my appliance to encrypt messages that are outbound and have a subject header that begins with the keyword Secure inside brackets: [Secure]. My intent was to eliminate some false positives by including the brackets. What ended up happening was that, for some reason, ALL outbound messages were being encrypted.
    After some more testing it seems like the content filter is ignoring the brackets as a requirement, but I can't seem to find anything in the online help to back this up.
    Can someone assist with verification of the requirements around using a keyword in the subject to allow users to encrypt outbound messages voluntarily?
    Best practices around achieving this if anyone has them would also be greatly appreciated.
    Thanks,
    Chris

    Hi Chris,
    While I would have to see the syntax of the filter in question to be 100% sure, it sounds as if your conditional variable is being treated literally. The content filters will accept regular expressions, so something like [Secure] could be seen as: look for anything that contains
    an S, or a
    e or a
    c or a
    u or a
    r or a
    e
    Since you said 'begins with', we would be looking for anything in the subject that begins with one of these letters. This is due to the fact that the brackets [ ] are special characters in regular expressions, thus they need to be escaped. That being said, [Secure] would become \\[Secure\\]; we use the \\ to escape the brackets. In content filters, however, you can get by with using just one escape, as the filter is smart enough to go ahead and enter the additional escape for you. So in the filter it would be something like: begins with \[Secure\]
    Once you enter this in the condition for your filter, it will display like the following:
    subject == "^\\[Secure\\]"
    I hope that helps.
    Christopher C Smith
    CSE
    Cisco IronPort Customer Support  

  • Best practice PDW database backup strategy/plan

    Hello All,
    We are ready with the PDW infrastructure; the appliance is almost ready and we are planning for implementation.
    But before that, I have to document a best-practice PDW database backup strategy/plan.
    Since the PDW environment is pretty new, please help me with the best backup strategy/plan that can be followed in my proposed PDW solution.
    your suggestions will be highly appreciated.
    Regards,
    Anish.S
    Asandeen

    Hi Anish.S,
    According to your description, you want to back up a SQL Server Parallel Data Warehouse (PDW) database.
    Before we get to the backup and restore syntax, it's worth noting that the Parallel Data Warehouse (PDW) appliance architecture offers an environment that greatly enhances backup times, due to dedicated storage and network interfaces (see the following post for more information: https://saldeloera.wordpress.com/2012/07/09/lesson-1-of-parallel-data-warehouse-basic-architecture-overview/).
    For more details on how to back up and restore databases on PDW, please refer to the following blog:
    http://www.sqlservercentral.com/blogs/useful-information-and-case-studies-covering-data-warehousing-data-modeling-and-business-intelligence/2012/10/04/parallel-data-warehouse-pdw-how-to-using-backup-and-restore-database-on-pdw/
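    As a sketch of the syntax itself (hypothetical database and share names; PDW backs up to a UNC path on a dedicated backup server):
    BACKUP DATABASE SalesDW
      TO DISK = '\\backupserver\backups\SalesDW_full';
    RESTORE DATABASE SalesDW_Copy
      FROM DISK = '\\backupserver\backups\SalesDW_full';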
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Music on Hold: Best Practice and site assignment

    Hi guys,
    I have a client with multiple sites, a large number of remote workers (on and off domain) and Lync Phone Edition devices.
    We want to deploy a custom music on hold file. What's the best way of doing this? I'm thinking
    Placing the file on a share on one of the Lync servers. However, this would mean (I assume) that clients will always try to contact the UNC path every time a call is placed on hold, which would result in site B connecting to site A for its MoH file. This
    is very inefficient and adds delay onto placing a call on hold. If accessing the file from a central share is best practice, how could I do this per site? Site policies I've tried haven't worked very well. For example, if a file is on
    \\serverB\MoH\file.wma for a site called "London Site", what commands do I need to run to create a policy that will force clients located at a site to use that UNC path? Also, how do clients know what site
    they are in?
    Alternatively, I was thinking of pushing out the WMA file to local devices via a Group Policy, and then setting Lync globally to point to %systemdrive%\MoH\file.wma. Again, how do I go about doing this? Also, what would happen to LPE devices that wouldn't
    have the file (as they wouldn't get the GPO)?
    Any help with this would be appreciated. Particularly around how users are assigned to sites, and the syntax used to create a site policy for the first option. Any best practice guidance would be great!
    Thanks - Steve

    Hi StevehootMITS,
    If you are using Lync Phone Edition or other devices that don't provide endpoint MoH, you can use PSTN gateways to provide music on hold. For more information about Music on Hold, you can check
    http://windowspbx.blogspot.in/2011/07/questions-about-microsoft-lync-server.html
    Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or
    suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.
    Best regards,
    Eric

  • Best practice on drivers update

    How do you update drivers?
    There are so many different types and models of computers. Do you update drivers with .inf files or .exe files?

    You can do it with two ways:
    Application model
    Package / Program
    I don't know if there really is a 'best practice', though. The driver updating itself is done with this neat little utility called devcon.exe (more info on the syntax it uses here:
    http://msdn.microsoft.com/en-us/library/windows/hardware/ff544746%28v=vs.85%29.aspx). For the correct version of the utility, you need to download Windows Driver Kit for the correct version of Windows you will be deploying the updated drivers to (http://msdn.microsoft.com/en-US/windows/hardware/gg454513).
    Devcon is used for the .inf file updates, for the .exe packaged drivers you simply run the .exe silently. Jörgen has done an excellent sample on this with the application model:
    http://ccmexec.com/2013/10/update-a-device-driver-configuration-manager-2012/

  • Looking up an EJB – best practices

    Hi, what's the correct way to specify an EJB when attempting to get
    the home interface?
    Using WebLogic, it seems people just specify the JNDI name and it
    seems to work fine.
    Object ref = context.lookup("testsession");
    But other times I see this syntax:
    Object ref = context.lookup("java:comp/env/ejb/MySession2");
    Which method is best practices?
    Thanks

    Hello Marcus,
    Actually, EJB 1.1 introduced a formal manner to specify the location of EJBs, namely the "java:comp/env/ejb" location. This is actually a best practice for specifying the location of your EJBs, because the "application assembler" doesn't need to know exactly what the JNDI name of the particular EJB is. This task is left to the actual deployer. If you use the actual JNDI name of the EJB instead of Sun's recommended prefix, then you are limiting the overall portability of the EJBs, because the code must be modified to reflect the exact JNDI name that will be used based on the particular J2EE application server that they are being deployed on.
    You can read up about this in chapter 14 of the EJB 1.1 specification as well as chapter 5 of the J2EE 1.2 specification. Also, check out Mastering Enterprise JavaBeans, 2nd Edition by Ed Roman, Scott W. Ambler, and Tyler Jewell for more explanation and code examples.
    Best regards,
    Ryan LeCompte
    [email protected]
    http://www.louisiana.edu/~rml7669

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I am often struggling with is the Logical Levels (in the Content tab) where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates - to get them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level. I can see the use of the logical levels when using aggregate fact tables (on quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect to all dimensions; it depends on the report you are creating. But as a best practice we should maintain them all at Detail level when you are mentioning any join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level then you should use Detail level; else use Total level (at Product, Employee level).
    "Get Levels. (Available only for fact tables.) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension."
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v
