Comparison Mechanism of AMM

Hello !
Please explain
what is the key difference between the processes/mechanisms for GROWING/SHRINKING of pools (SGA_TARGET <> 0)
and the clearing of granules (SGA_MAX_SIZE set and SGA_TARGET = 0)?
Thanks and regards,
Pavel

Growing and shrinking of pools is driven by demand.
In 10g we have SGA_TARGET, and details of SGA resize operations are available in the view below.
V$SGA_RESIZE_OPS displays information about the last 400 completed SGA resize operations. This does not include in-progress operations. All sizes are expressed in bytes.
In 11g we have the MEMORY_TARGET parameter, which covers both SGA and PGA.
V$MEMORY_CURRENT_RESIZE_OPS displays information about memory resize operations (both automatic and manual) which are currently in progress. An operation can be a grow or a shrink of a dynamic memory component.
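As an illustration, here is a minimal JDBC sketch for inspecting recent resize activity (the connection URL and credentials are placeholders; any SQL client can run the same query against V$SGA_RESIZE_OPS):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SgaResizeOps {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; requires the Oracle JDBC driver on the classpath.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT component, oper_type, initial_size, final_size, end_time " +
                 "FROM v$sga_resize_ops ORDER BY end_time DESC")) {
            while (rs.next()) {
                System.out.printf("%-30s %-8s %12d -> %12d  %s%n",
                        rs.getString("component"), rs.getString("oper_type"),
                        rs.getLong("initial_size"), rs.getLong("final_size"),
                        rs.getTimestamp("end_time"));
            }
        }
    }
}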

Similar Messages

  • Detailed comments

I've gone through the EAD spec. and I've come up with a detailed list of comments that I'd like to post here and also will post over at TheServerSide.com and send directly to the JSR group. This area will give the community a chance to discuss this more. There are a few things that I did not finish commenting on, but overall this document is fairly complete with my comments thus far. But then again, I've only read the spec twice and spent a day considering it.
    I'll continue to add to this thread as I think of more.
    Brian Pontarelli
    Included document
    JSF 2.1
    Assuming that most applications will be setup like:
    HTML->HTTP->FacesServlet->reconstitute phase->validation phase->model phase
This leads to an enormous amount of duplication as well as overhead. The information for each form component will be stored in the HttpServletRequest, expanded into the UIComponent Tree and stored in the FacesContext, and finally migrated to the Application's Model. Although this seems to be a small amount of work when considering smaller forms with two or three fields, it could become larger with 20-30 field forms. This continues to grow when considering an intensive web application with many users (i.e. 20+ requests per minute). In addition, the cost of the UIComponent and Application's Model classes themselves might further increase the amount of memory consumed by this triple duplication, as they may contain other member variables that increase each instance's footprint. In addition, without some comparison mechanism, the HttpServletRequest parameters will be copied to the UIComponent's local values, which will be copied to the Application's Model each request. If, for example, the request goes all the way to the Invoke Application phase and then encounters an error, which will redirect the user back to the form so that they can fix some values, after they have fixed the values and resubmitted, the triple transfer happens, in its entirety, again (unless the application program is exceptionally savvy and is willing to build in an update recognition component that could skip application model updates when they are not needed).
The simpler and more concise design seems to be a single duplication of data. This would be from the HttpServletRequest to the Application's Model. This would remove the UIComponent's local values entirely. The UIComponent tree could still be constructed and optimized however the JSF spec allows. Likewise, the UIComponents themselves could be "backed" by the Application's Model classes, as is the case in the MVC design of Java's Swing APIs.
The decoding process would work the same but would store the decoded information in the Application's Model. Likewise, the encoding would retrieve it from the Application's Model. This mimics other frameworks such as Jakarta Struts with their ActionForm classes that are essentially the Application's Model (or at least positioned in such a way that they could be).
    JSF 2.3
This has been tried many times and shown to be lacking. Server-side event models do not scale well because of the overhead of marshalling and unmarshalling the entire HttpServletRequest, including all the form parameters, so that a single checkbox can change the values in a single selectbox (for example). The only solution to this problem seems to be the use of contained transmission systems, which transmit only the needed component's state to the server. The server can respond with updates to any component, or whatever it needs. In order to attempt to accomplish this in a web browser, some very extensive JavaScript needs to be written, which can cause enormous amounts of support issues. I think that you'll find very little need for RequestEventHandlers and find that nearly 98%+ of the work will be done in the ApplicationEventHandlers.
    JSF 2.6
    This needs to be rewritten. This contains information about the Lifecycle management process before the reader knows what that is.
    JSF 2.7
I don't really like the concept of 1 Tree to 1 page yet, but I don't know why. Need to think about this and draw some concrete conclusions about how this is lacking and what impacts it will have.
How will applications be able to forward to HTML pages? It doesn't seem possible in the current setup without creating Tree objects for pages that don't contain JSF code. Likewise, it seems that the requirement of having response Trees dictate the outbound page requires that every JSP page in the entire application use JSF code (in order to seem conceptually correct). This seems like a large requirement for businesses with existing infrastructure. Not to mention the need to be redirected out of the J2EE application server to an ASP server. Of course no one wants that, but it is a reality. This seems very restricting. The flow should be flexible enough to support forwards and redirects to any resource inside and outside the container.
    JSF 2.8
The requirement on forcing the Tree to be saved to the response or the session seems very restricting. This section is very ambiguous about what writing the Tree to the response means. Does this mean doing nothing because the JSF tags will do everything for you? Or does it mean adding additional information to the HTML about the state of the JSF system? In the latter case, this is simply duplication of the information that the JSF tags write out, is it not? And there might be implementations with large Trees and many users that do not want to bog down the session with this information and would rather spend the computing cycles to reconstruct it each time from the request. Additionally, would there be cases where a developer would want to send the information from a normal HTML page to the JSF system and have it construct a UIComponent Tree? This seems likely and not possible (?) with the requirement from this section.
If you decide to leave in the local values and model values that I disagreed with above, you'll need to be specific about where the values for the response come from when encode is called. If they come from the local values of the UIComponent, then the application logic will need to be responsible for migrating the values from the Application's Model to the UIComponent's local values. If they come from the Application's Model, then every component will need to supply model references (I think). Or a better solution to this problem would be to add another phase to the lifecycle called "Update Local Values" which is designed to update the UIComponent's local values from the Application's Model if necessary. Or you could simply do away with the UIComponent's local values altogether in favor of a more MVC-oriented system where the view is directly backed by the Application's Model (similar to Swing).
    JSF 3.1.2
You probably want to add a way to determine a component's individual and absolute id (bar and /foo/bar). This will be useful in tools as well as debugging.
    JSF 3.1.5-3.1.6
See above about my issues with model references and local values. What if I write a JavaScript Tree component? This would mean that the UIComponent's local value would be of type com.foo.util.Btree (or something) and my Application's Model might be the same. There is a lot of overhead doing things this way. What if my tree stores the groups and all the employees for a company with 50,000 people and 500 groups (not the best way to do things, but possible)? What if the Tree is roughly 1K in size (Java object size) and 2000 users are banging away at the system all day? Let's see: that's 1K for the UIComponent's local value, 1K for the Application's Model, 2000 users, and roughly 4 Megs of consistent memory usage for a single component.
    JSF 3.5.x
This was a major concern to me when I wrote both of my frameworks. A reusable Validator is excellent because it reduces the amount of code duplication. However, it is very difficult to tailor messages for specific UIComponents using a reusable Validator. For example, on one page I use a text box for age and on another I use it for income. I don't want my error messages to be generic, stating "This value must be greater than 0 and less than X". I want the user to know what must be within the range.
One solution is to use the name of the input in the error message. This forces the user to name inputs in human-readable form, which might not be possible. For example, I have an input for monthly overhead and I name it monthlyOverhead so that it is a legal variable name. You can't have a message that reads, "monthlyOverhead must be greater than 0". This just won't fly in a production environment. It needs to be nice and human-readable and say, "Your monthly overhead must be greater than zero." However, you can't name your UIComponent "Your monthly overhead", especially if you intend to do JavaScript on the page. Besides, it's just bad style.
Another solution is requiring specific sub-classes for each message required, or some parameter from the page to denote the specific message to use. The former clutters up the packages with tons of Validators and also requires way too much coding. The latter completely negates the ability to use parameterized messages without further bogging down the page with all the (un-localized) parameters to the error message, or forces the placing of all the parameters inside the resource bundle for the error messages with a standard naming scheme (i.e. for the first parameter to the message, "longRange.monthlyOverhead.0=Your monthly overhead"). Since 1/3 of any application is really the view and interaction, of which a large chunk is error messages, this is a major issue that must be considered. Because it always happens that the CEO plays around with the application one day and says, "I really wish this error message read this ...", and then you're in for some major headaches, unless this problem is solved up front.
    JSF 3.5.2.x and 4.x and 7.6.x
These sections seem to break up the flow of reading. The previous sections were charging forward with information about the interfaces, the JSF classes and specifics about what is required for each Phase. Then we need to downshift quite a bit to talk about default/standard implementations that ship with JSF or are required to be implemented by implementers. I think that these should be contained in a later section, after 5, 6, 7 and 8.
    JSF 5.1.2
What are the implications of this decision on Internationalization? When different UIComponents encode using different Locales and the HttpServletResponse's content type has already been set, there could be rendering problems on the client side in the browser.
    JSF 5.1.5
    Messages added to the message queue during validation or processing contain Unicode String Objects and could be written in any language. The Message Object does not contain information about the Locale that the message needs to be converted to and this is needed for internationalization. If I have a multi-lingual portal and output error messages in multiple languages, the spec needs to really consider what and where the charset for the HTTP header is going to be set. What if JSF realizes it needs to use UTF-8 but another tag library an application is using assumes fr_FR, who is correct and what will happen? How will JSF determine what encoding to use when it has Messages in ten different languages? What if the container starts writing the output to the stream before the header is set? Etc. etc.
    JSF 8.1
This is possibly the most confusing and poorly written section in the entire document. This uses terms that don't relate to anything, old class names, and undescribed tables. This needs to be re-written in a more concise way. I did not understand what a custom action was until I reached section 8.2.6 and realized that an action was really a tag implementation. Action is a poor choice of words because not all tags equate to actions. What is the action of an input tag? I understand action when talking about for-loop tags, but not input tags.
    JSF 8.3
    This seems quite contradictory to section JSF 2.8 because it leads the reader to believe that they have no control over the implementation of the use_faces tag and the method of saving the JSF state. That is UNTIL they read section 8.5. These two sections need to be combined to clarify the document.
    Comments:
    I think that JSF is a very good idea in general and that it is a very complicated thing to define (due mostly to the use of HTTP, which is a stateless protocol). There are so many frameworks out there and each has its own benefits and downfalls. However, it is imperative that this specification attempt to solve as many problems as possible and not introduce any more. The spec must be flexible enough to support implementations that drive for speed and those that drive for flexibility. It must also support enormous amounts of flexibility internally because as vendors attempt to comply with it, they want to make as few changes to their own code base as possible.
    Right now, JSF has not accomplished these goals. I think that it needs to consider a lot more than it has and really needs to address the more complex issues.

    Brian,
    I've gone through the EAD spec. and I've come up with
    a detailed list of comments that I'd like to post here
    and also will post over at TheServerSide.com and send
directly to the JSR group.
Thanks for the feedback, it's really appreciated. I've included some comments below. Even though I'm a member of the spec group, these are just my personal comments and do not represent any official position of the group. It's very important that you send feedback you want the spec group to consider to the mail address listed in the spec draft. Some of us read this forum, and try to answer questions and clarify things as best we can, but the only way to make sure the feedback is considered is to send it to the JSR-127 mail address.
    Included document
    JSF 2.1
    Assuming that most applications will be setup like:
    HTML->HTTP->FacesServlet->reconstitute
    phase->validation phase->model phase
    This leads to an enormous amount of duplication as
    well as overhead. The information for each form
    component will be stored in the HttpServletRequest,
    expanded into the UIComponent Tree and stored in the
FacesContext and finally migrated to the Application's
Model. [...]
This is not so bad as it may seem, since typically it's not copies of the information that get stored in multiple places, just references to the same object that represents the information.
    Consider a simple text field component that is associated with a model object. The text value arrives with the request to the server which creates a String object to hold it. The UI component that represents the text saves a reference to the same String object and eventually updates the model's reference to point to the same String object. The application back-end eventually gets a reference to the value from the model and, say, saves it in a database. Not until you hit the database do you need to make a copy of the bytes (in the database, not in the JVM).
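A tiny sketch of that point in plain Java (the component and model classes here are hypothetical stand-ins, not JSF classes): the same String instance flows through all three places, and nothing is copied.
public class SharedReferenceDemo {
    // Hypothetical stand-ins for a UI component and a model object.
    static class TextComponent { Object localValue; }
    static class Model { String name; }

    public static void main(String[] args) {
        // The servlet container creates one String per request parameter...
        String requestParam = "Brian";

        // ...the component stores a reference to it...
        TextComponent comp = new TextComponent();
        comp.localValue = requestParam;

        // ...and the model update phase copies the reference, not the bytes.
        Model model = new Model();
        model.name = (String) comp.localValue;

        System.out.println(model.name == requestParam); // true: same object
    }
}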
    JSF 2.3
    This has been tried many times and shown to be
    lacking. Server-side event models do not scale well
    because of the overhead of marshalling and
    unmarshalling the entire HttpServletRequest including
    all the form parameters, so that a single checkbox can
    change the values in a single selectbox (for example).
    The only solution to this problem seems to be the use
    of contained transmission systems, which transmit only
the needed component's state to the server. The server
    can respond with updates to any component, or whatever
    it needs. In order to attempt to accomplish this in a
    web browser, some very extensive JavaScript needs to
    be written which can cause enormous amounts of support
issues. I think that you'll find very little need for
RequestEventHandlers and find that nearly 98%+ of the
work will be done in the ApplicationEventHandlers.
I agree with you that a web app can never be as responsive as a thick-client app unless client-side code (JavaScript) is used. Web apps must be designed with this in mind, which can be a challenge in itself.
    But there are still advantages with an event-based model, namely that it provides a higher abstraction layer than coding directly to the HTTP request data. And even in a web app, having stateful components that generate events simplifies the UI development. As an example, say you have a large set of rows from a database query you want to display a few rows at a time. A stateful component bound to this query result can take care of all the details involved, rendering Next/Previous buttons as needed. Clicking on one of the buttons fires an event that the component itself can handle to adjust the display to the selected row subset.
    Coding the logic for this over and over in each application that needs it (as you need to do without access to powerful components like this) is error prone and boring ;-)
    Finally, JSF components can be smart and generate client-side code as well to provide a more responsive user interface.
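To make the pager example concrete, here is a rough sketch in plain Java (this is not any actual JSF API; the class and method names are illustrative):
import java.util.List;

// Illustrative stateful component: it owns the scroll position and
// handles its own "next"/"previous" events, so each application does
// not have to re-implement paging logic.
public class RowPager {
    private final List<String> rows;   // e.g. a cached query result
    private final int pageSize;
    private int first;                 // index of the first visible row

    public RowPager(List<String> rows, int pageSize) {
        this.rows = rows;
        this.pageSize = pageSize;
    }

    // Event handlers the component registers for its own buttons.
    public void onNext()     { if (first + pageSize < rows.size()) first += pageSize; }
    public void onPrevious() { if (first - pageSize >= 0)          first -= pageSize; }

    // Rendering shows only the current subset, plus buttons as needed.
    public void render() {
        rows.subList(first, Math.min(first + pageSize, rows.size()))
            .forEach(System.out::println);
        if (first > 0) System.out.println("[Previous]");
        if (first + pageSize < rows.size()) System.out.println("[Next]");
    }
}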
    JSF 2.6
    This needs to be rewritten. This contains information
    about the Lifecycle management process before the
reader knows what that is.
Many parts of the draft need to be rewritten; it's still a work in progress.
    JSF 2.7
I don't really like the concept of 1 Tree to 1 page
    yet, but I don�t know why. Need to think about this
    and draw some concrete conclusions about how this is
lacking and what impacts it will have.
I have the same concerns, and think we need to take a close look at how a response can be composed from multiple JSF Trees, or a combination of regular JSP pages and JSF Trees, etc. I know others in the spec group agree that this is a vague area that needs more attention.
    How will applications be able to forward to HTML
pages? It doesn't seem possible in the current setup
without creating Tree objects for pages that don't
    contain JSF code. Likewise, it seems that the
    requirement of having response Trees dictate the
    outbound page require that every JSP page in the
    entire application use JSF code (in order to seem
conceptually correct). [...]
I don't think this is a problem. The application can decide to redirect (or forward) to any resource it wants when it processes an application event; it doesn't have to generate a new JSF response. But yes, navigation is also an area that needs attention in general.
    JSF 2.8
    The requirement on forcing the Tree to be saved to the
    response or the session seems very restricting. This
    section is very ambiguous about what writing the Tree
to the response means. [...]
It is, isn't it ;-) Again, this is an area that still needs work, and I believe we must be able to provide a lot of flexibility here. Depending on the type of components in the Tree, the size of the Tree, the number of concurrent users and size of the application, etc., different approaches will be needed. How much data must be saved is also dependent on the type of component.
    Additionally, would there be cases where a developer
    would want to send the information from a normal HTML
    page to the JSF system and have it construct a
    UIComponent Tree? This seems likely and not possible
(?) with the requirement from this section.
In that case the request initiated from the HTML page would be directed directly to application code (a servlet, maybe) which would create an appropriate JSF component Tree and generate a response from it.
    If you decide to leave in the local values and model
values that I disagreed with above, you'll need to be
specific about where the values for the response come
from when encode is called. If they come from the
local values of the UIComponent, then the application
logic will need to be responsible for migrating the
values from the Application's Model to the
UIComponent's local values. If they come from the
Application's Model, then every component will need to
supply model references (I think). [...]
I think this is pretty clear in the current EA draft. First, a model is optional for the basic component types (while more complex things, like a DataGrid, may require it). The draft says (in 3.16): "For components that are associated with an object in the model data of an application (that is, components with a non-null model reference expression in the modelReference property), the currentValue() method is used to retrieve the local value if there is one, or to retrieve the underlying model object if there is no local value. If there is no model reference expression, currentValue() returns the local value if any; otherwise it returns null."
Other parts of the spec (can't find it now) deal with how the local value is set and reset. The effect for the normal case is that if there's a non-null model reference, its value is used; otherwise the local value is used. In special cases, a local value can be set to explicitly ignore the model value.
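In code form, the rule quoted from section 3.16 amounts to something like the following sketch (illustrative classes, not the actual JSF API):
// Illustrative sketch of the currentValue() rule quoted above.
public class ComponentValue {
    Object localValue;          // set by decode/validation, may be null
    String modelReference;      // e.g. "customer.name", may be null
    ModelLookup model;          // resolves model references

    interface ModelLookup { Object get(String reference); }

    public Object currentValue() {
        if (modelReference != null) {
            // Local value wins if present; otherwise fall through to the model.
            return localValue != null ? localValue : model.get(modelReference);
        }
        // No model reference: the local value (possibly null) is all there is.
        return localValue;
    }
}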
    JSF 3.5.x
    This was a major concern to me when I wrote both of my
    frameworks. A reusable Validator is excellent because
    it reduces the amount of code duplication. However, it
    is very difficult to tailor messages for specific
UIComponents using a reusable Validator. For example,
on one page I use a text box for age and on another I
use it for income. I don't want my error messages to
be generic, stating "This value must be greater
than 0 and less than X". I want the user to know what
must be within the range. [...]
I agree that this is a concern. In addition to the solutions you have suggested, I think a way to solve it is by letting validators fire "invalid value" events of different types. These events would contain a default message but also getter methods for the interesting parts (e.g. the invalid value, the start and the stop value for an "invalid range" event). An event handler can use the getter methods for the individual values and build a message that's appropriate for the application.
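A sketch of what such an "invalid range" event might look like (the class and method names are illustrative; the draft does not define them):
// Illustrative "invalid range" event: carries a default message plus
// getters so an application-level handler can build its own wording.
public class InvalidRangeEvent {
    private final Object invalidValue;
    private final long min;
    private final long max;

    public InvalidRangeEvent(Object invalidValue, long min, long max) {
        this.invalidValue = invalidValue;
        this.min = min;
        this.max = max;
    }

    public Object getInvalidValue() { return invalidValue; }
    public long getMin() { return min; }
    public long getMax() { return max; }

    public String getDefaultMessage() {
        return "Value must be greater than " + min + " and less than " + max;
    }
}

// An application handler can then produce a field-specific message:
//   "Your monthly overhead must be greater than " + event.getMin()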
    JSF 5.1.2
    What are the implications of this decision on
    Internationalization? [...]
    JSF 5.1.5
    Messages added to the message queue during validation
    or processing contain Unicode String Objects and could
    be written in any language. The Message Object does
    not contain information about the Locale that the
    message needs to be converted to and this is needed
for internationalization. [...]
I need to read up on the latest i18n proposals, but in general I think you're right that there's more work to do in this area.
    JSF 8.1
    This is possibly the most confusing and poorly written
    section in the entire document. This uses terms that
don't relate to anything, old class names and
un-described tables. [...]
The whole JSP layer is still immature, but IMHO, we need to get the API right before we address the JSP issues.
    I did not understand what a custom
    action was until I reached section 8.2.6 and realized
    that an action was really a tag implementation. Action
    is a poor choice of words because not all tags equate
    to actions. What is the action of an input tag? I
    understand action when talking about for-loop tags,
but not input tags. [...]
Actually, "action" is the proper name defined by the JSP specification for what's described in this section. A "JSP action" is represented by an "XML element" in a page, which in turn consists of a "start tag", a "body" and an "end tag", or just an "empty tag".
    Comments:
    I think that JSF is a very good idea in general and
    that it is a very complicated thing to define (due
    mostly to the use of HTTP, which is a stateless
    protocol). There are so many frameworks out there and
    each has its own benefits and downfalls. However, it
    is imperative that this specification attempt to solve
    as many problems as possible and not introduce any
    more. The spec must be flexible enough to support
    implementations that drive for speed and those that
    drive for flexibility. It must also support enormous
    amounts of flexibility internally because as vendors
    attempt to comply with it, they want to make as few
    changes to their own code base as possible.
    Right now, JSF has not accomplished these goals. I
    think that it needs to consider a lot more than it has
and really needs to address the more complex issues.
I agree, and thank you for the feedback. There are many holes yet to be filled and many details to nail down. All of this takes time, since you must build support for the spec among a large number of vendors and other market groups, as well as among developers; this is one of the most important goals for any specification.

  • Storing Foreign Characters in oracle 9i Lite Client Database.

    Hi All,
    My Database configuration Is
    RDBMS VERSION 9.0.1.1.1
    NLS_CHARACTERSET AL32UTF8
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    Operating System Windows 2000
    My Oracle Lite configuration Is
    Oracle9i Lite Release 5.0.2.0.8
    Client Locations:
    Africa,Europe,Asia,America
Currently all clients store data only in English in the client odb, through either Java or Oracle Forms applications.
We now want to store data for some specific tables in the clients' own languages. We should also be able to retrieve that data and see it as it was entered.
For example:
The client in China, after downloading the odb, should be able to enter data for those specific tables in the Chinese language. (They use Windows 2000/XP English versions.)
1. What settings do I need to make in the client environment?
(e.g. installing Chinese fonts or setting the NLS_LANG parameter, etc.)
2. What settings do I need to make in the server environment?
There should be no loss of data, and no junk values stored instead of the actual values entered by clients.
Also, when I query those specific tables in the Oracle database after synchronization, I should be able to view the data entered by the clients in their own languages.
E.g. data entered by China clients should be visible in Chinese fonts,
data entered by Brazil clients should be visible in Brazilian fonts,
data entered by Vietnam clients should be visible in Vietnamese fonts.
3. Will there be any synchronization issues when clients try to sync after data entry in their own languages?
Please advise on these issues.

I have given you the info on how to set up Oracle Lite for UTF8. I can only point you in the direction for globalization on the enterprise server. Also note that Oracle Lite only supports NLS_LENGTH_SEMANTICS=BYTE; your EE database will be CHAR, so you have to alter the session parameter before you create the repository and/or publish your application.
Here is an Oracle document on globalization:
    http://www.oracle.com/technology/tech/globalization/pdf/TWP_AppDev_Unicode_10gR2.pdf
    Here is what Oracle Lite's Developers Guide has on development of linguistic sorts:
    2.11 Support for Linguistic Sort
Linguistic sort is a feature for the ASCII version of Oracle Database Lite. It produces a culturally acceptable order of strings for a specified language or collation sequence. The ASCII version supports several code pages defined by single-byte 8-bit encoding schemes. Each of these code pages is a superset of 7-bit ASCII, and the additional accented characters necessary to support a group of European languages are included in the upper 128 bytes. A new string comparison mechanism is provided that produces strings in a linguistically correct order by mapping each collation element of a string to the corresponding 8-bit value of the supported code page.
    2.11.1 Creating Linguistic Sort Enabled Databases
    The linguistic sort capability must be enabled when the database is created using the CREATEDB command line utility with the <collation_sequence> enabled.
    Note:
    For more information on the CREATEDB utility, see Section A.2, "CREATEDB".
    The behavior of the ORDER_BY clause and the WHERE condition are determined by how the NLS_SORT parameter is implemented. Binary sorting is the default setting, and is used unless the <collation_sequence> parameter is set to use the linguistic sort ordering rules.
    NLSRT is not supported in the current version of Oracle Database Lite. Therefore, NCHAR data type is not yet available.
    2.11.2 How Collation Works
    Collation refers to ordering of strings into a culturally acceptable sequence. A collation sequence is a sequence of all collation elements from an alphabet from smallest collation order to the largest. Once a collation sequence is given, orders of all strings from the same alphabet are fixed. As such, the collation sequence encodes the linguistic requirements on collation. A collation element is the smallest sub-string that can be used by the comparison function to determine the order of two strings.
    2.11.3 Collation Element Examples
Normally, a collation element is just one character. In binary sorting, only one property, the code value that represents a character, is used. But in linguistic sorting, usually three properties are used. The primary level of difference is the base character. The secondary level of difference is for diacritical marks on a given base character. The tertiary level of difference is for the case of a given character. Punctuation can function as a fourth level of difference, but comparisons for punctuation occur last and are made at the binary rather than the linguistic level. These properties are used for each collation element. The following sections contain examples that demonstrate sorting priorities.
    2.11.3.1 Sorting Normal Characters
    This section lists a set of examples that describe how to sort normal characters.
    Example 1
    'a' < 'b'. There is a primary difference between them on the character level.
    Example 2
'À' > 'a'. This difference occurs on the secondary level. Note that 'À' and 'a' are considered "equal" on the primary level.
Example 3
'À' < 'à' in FRENCH but 'À' > 'à' in GERMAN. This difference occurs on the tertiary level. Note that 'À' and 'à' are considered "equal" on the primary and secondary levels. Also note that the case convention may be different for different languages.
    Example 4
    'às' < 'at'. This is a difference on the primary level. This example shows the role of difference levels: the lower level differences are ignored if there is a primary level difference anywhere in the strings.
    Example 5
'+data' < '-data' < 'data' < 'data-'. If strings are compared and present no difference on the primary, secondary, or tertiary levels, they are compared for punctuation.
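As an aside, the same primary/secondary/tertiary behavior can be reproduced with Java's java.text.Collator, which is a convenient way to experiment with these levels (this uses the JDK's collator, not Oracle Lite's implementation):
import java.text.Collator;
import java.util.Locale;

public class CollationDemo {
    public static void main(String[] args) {
        Collator fr = Collator.getInstance(Locale.FRENCH);

        fr.setStrength(Collator.PRIMARY);   // only base letters count
        System.out.println(fr.compare("À", "a") == 0); // true: equal on the primary level

        fr.setStrength(Collator.SECONDARY); // accents now count
        System.out.println(fr.compare("À", "a") > 0);  // true: 'À' > 'a' on the secondary level

        fr.setStrength(Collator.TERTIARY);  // case also counts
        System.out.println(fr.compare("às", "at") < 0); // true: primary difference s < t decides
    }
}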
    2.11.3.2 Reverse Sorting of French Accents
    Some languages, particularly French, require words to be ordered on the secondary level according to the last accent difference. This behavior is known as French secondary sorting or French accent ordering.
    Example
    'côte' < 'coté' in FRENCH but 'coté' < 'côte' in GERMAN. Note that the secondary difference of 'e' and 'é' occurred later than those of 'ô' and 'o'.
    2.11.3.3 Sorting Contracting Characters
There are some special cases where two or more characters in a group can function as a single collation element. These types of collation elements are called 'contracting characters' or 'group characters'. In these cases, each of these characters' properties is assigned an appropriate value.
    Example
    'h' < 'ch' < 'i' in XCZECH. Here 'ch' is assigned a primary property value which differentiates it from 'h' and 'i', such that 'h' < 'ch' < 'i'. Note that 'ch' is treated as a single character.
    2.11.3.4 Sorting Expanding Characters
    If a letter sorts as if it were a sequence of more than one letter, it is called an 'expanding character'. For example, in German the sharp s (ß) is treated as if it were a string of two characters 'ss' when comparing with other letters.
    2.11.3.5 Sorting Numeric Characters
Only sorting of single-digit characters from '0' to '9' is currently supported. For the supported European languages, a digit character is always sorted as greater than any alphabetic character. For other languages this may not be the same. Other numeric characters, such as Roman numerals, and counting sequences, such as "one", "two", "three", are not supported at this time.
    Example
    '1' > 'z' in any European language, '1' < 'a' in LATVIAN. Note that this difference occurs on the primary level.

  • Problem with Multiple Data Source Retrieval

    Hi,
We are working on a project that involves JSF and XML parsing. We have successfully parsed the XML and made an object model (Lists) out of it to display it using JSF. Additionally, we also want to display some data from the database, compare it with the XML data, and then display it on the GUI.
So e.g. we have an attribute code in the XML structure which is a number. Before displaying it on the GUI, we first have to query the database for what this number or code actually means, i.e. its description text, get it, and display it alongside the other data from the XML. So the questions are:
1) How to get data from different data sources
2) How to compare the data
3) How to merge and display it in the GUI
And we have to display it with JSF, and as far as I know JSF has no comparison mechanism?!
Thanks for any help,

    You can compare stuff in the EL, but I don't think this is what you need.
    You can just use Java code in the backing bean class for all the business logic. You can use DAO classes for database access logic. Finally for displaying you can use the JSF tags such as h:outputText.
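A minimal sketch of that structure (the DAO interface and the names below are hypothetical, purely for illustration):
import java.util.ArrayList;
import java.util.List;

// Backing bean that merges XML data with database lookups before display.
public class ItemBean {
    // Hypothetical row type combining both sources.
    public static class Row {
        public final int code;
        public final String description;  // resolved from the database
        public Row(int code, String description) {
            this.code = code;
            this.description = description;
        }
    }

    // Hypothetical DAO for the database access logic.
    public interface CodeDao { String lookupDescription(int code); }

    private final CodeDao dao;

    public ItemBean(CodeDao dao) { this.dao = dao; }

    // Called with the codes parsed from the XML; returns merged rows that
    // a JSF page can render, e.g. with h:dataTable / h:outputText.
    public List<Row> buildRows(List<Integer> xmlCodes) {
        List<Row> rows = new ArrayList<>();
        for (int code : xmlCodes) {
            rows.add(new Row(code, dao.lookupDescription(code)));
        }
        return rows;
    }
}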

  • Using public views to check function implementation

    Hi All,
    I'm trying to use some public views to check whether the repositories of Development, Acceptance and Production are in sync.
    In order to check the functions and procedures, I came up with the following query to have a simple comparison mechanism:
    SELECT f.function_type
    , f.schema_name
    , f.function_name
    , LENGTH(i.SCRIPT) script_length
    FROM all_iv_functions f
    , all_iv_function_impls i
    WHERE f.FUNCTION_ID = i.FUNCTION_ID
Unfortunately I get an error message:
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at "OWB_OWNER.OWM_VIEW_UTILITIES", line 572
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    The same error occurs when I try to view data of all_iv_function_impls in TOAD.
    Checked the viewdef, it selects (among others) the following:
    OWM_VIEW_UTILITIES.FUNCTION_SCRIPT(im.elementid) AS script.
    Of course the package body is wrapped so no way to see what's happening in there.
    I tried a search on this problem, no result.
    I did a general search on OWM_VIEW_UTILITIES, but no results.
    Checked Metalink, only Note 237082.1 mentions this package but doesn't clarify the situation.
    Any suggestions?
    Cheers, Patrick

    Hi,
You may try PowerShell; here are two PowerShell scripts that use
SMLets to reveal interesting information about user roles in SCSM. Please refer to them:
    https://gallery.technet.microsoft.com/Service-Manager-SCSM-User-ebcdfcd6
    Regards,
    Yan Li
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Bug: Diff fails for multiple constraints on one column.

    Hi all,
    I think, I came across a bug in SQL Developer:
    SQL Developer version: 3.0.04
    create table USER_A.foo (val NUMBER not null, CHECK (val IN (1,2,3)));
    create table USER_B.foo (val NUMBER not null);
Comparing these two schemas does not report any difference. However, when the not null constraints are removed from both tables, the check constraint difference is correctly reported.

Does not reproduce in SQL Dev 4. It seems the comparison mechanism was changed, so we can close the topic.

  • Performance comparisons between POF & open source serialization mechanism?

    I'm curious whether anyone has done any comparisons of performance and serialized object sizes between POF and open source mechanisms such as Google Protocol Buffers and Thrift, both of which seem to be becoming quite popular. Personally, I dislike having to write a separate schema and then generate classes from it, which Protocol Buffers and Thrift require you to do, and I vastly prefer POF's mechanism of keeping everything in the code (although I wish the POF annotation framework was officially supported). But aside from that, I'd prefer to use Coherence for many of the purposes that some of my co-workers are currently using other solutions for, and this would be useful information to have in making the case.
    FWIW, I hope someone at Oracle is seriously considering open-sourcing POF. I don't think that anyone who would've bought a Coherence license would decide not to because they could get POF for free. They'd just go and use something else, like the aforementioned Protocol Buffers and Thrift. Not only are many companies adopting these as standards, but as has been mentioned in other threads on this forum, that's exactly what even some Coherence users are doing:
    Re: POF compatibility across Coherence versions
I really wish I could encourage developers that I work with to give POF a look as an alternative to those two (both of which we're currently using), regardless of whether or not they plan on using Coherence in the immediate future. As things stand right now, I can't use Coherence for code that needs to be shared with people in other groups who haven't adopted Coherence yet. But if I could use POF outside of Coherence, it would probably be acceptable to those folks as a generic serialization mechanism, and it would make migrating such code to Coherence at some point down the road that much easier. If, on the other hand, I have to write that code around, say, Protocol Buffers, then it becomes much harder to later justify creating and maintaining POF as a second serialization mechanism for the same set of objects, which means it's much harder to justify using Coherence for those objects.
    In short, making POF usable outside of Coherence, and who knows, maybe even getting it supported in popular open source projects such as Cassandra (which, as I understand it, uses Thrift) would make it easier to adopt Coherence in environments where objects are already persisted in other systems.
    That's my two cents.
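For readers unfamiliar with what "keeping everything in the code" looks like, here is a minimal sketch of a POF-serializable class using Coherence's PortableObject interface (the class itself is illustrative, not from this thread):
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;

import java.io.IOException;

// A POF-serializable class: the wire format is defined directly in code,
// with explicit property indexes, rather than in an external schema file.
public class Person implements PortableObject {
    private String name;
    private int age;

    public Person() {}                      // required for deserialization

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public void readExternal(PofReader in) throws IOException {
        name = in.readString(0);
        age  = in.readInt(1);
    }

    @Override
    public void writeExternal(PofWriter out) throws IOException {
        out.writeString(0, name);
        out.writeInt(1, age);
    }
}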

    Hi,
    Thank you for links. It is very interesting.
I have implemented a POF serialization plugin for this benchmark: http://wiki.github.com/eishay/jvm-serializers/
You can get the code, run the benchmark for yourself, and compare results.
Handmade POF serialization: http://gridkit.googlecode.com/svn/wiki/snippets/CoherencePofSerializer.java
Reflection POF serialization: http://gridkit.googlecode.com/svn/wiki/snippets/CoherencePofReflection.java
Also, you should add two lines in BenchmarkRunner.java; all other instructions are on the jvm-serializers project page.
              Protobuf.register(groups);
              Thrift.register(groups);
              ActiveMQProtobuf.register(groups);
              Protostuff.register(groups);
              Kryo.register(groups);
              AvroSpecific.register(groups);
              AvroGeneric.register(groups);
    // register POF tests here
              CoherencePofSerializer.register(groups);
              CoherencePofReflection.register(groups);
              CksBinary.register(groups);
              Hessian.register(groups);
              JavaBuiltIn.register(groups);
              JavaManual.register(groups);
          Scala.register(groups);
A few comments on the results:
* A micro benchmark is a micro benchmark; I saw quite different results when comparing Java vs POF vs POF reflection on our own domain objects.
* POF scores very well compared to protocols like Protobuf or Thrift, especially on deserialization.
* The Kryo project is quite interesting; I'm going to give it a try in my next project for sure.
Again, thanks a lot for the link.

  • Comparison books Iphoto 4, iPhoto 5 early, iPhoto updated

    I just had delivered last night 5 copies of a hard cover 52 page book.
    First off, very fast turnaround time as it took 5 days from order to delivery. Impressive.
Half the photos in these books have been repeated in 2 previous hard cover books: one was made with iPhoto 4 right after that was released, and one was made right after iPhoto 5 was released but prior to any updates.
The initial iPhoto 5 book was made with the default dpi pdf setting, and the 5 new books were created at a hacked setting of 300dpi.
The iPhoto 4 book is by far the sharpest, most balanced and cleanest of the 3 as far as the images go.
The newest 300dpi books are a slight improvement in sharpness over the previous iPhoto 5 book - however, the contrast and color balance are quite different on some of the repeat images from the earlier 2 books.
Somewhere, the newest version of iPhoto 5 or the production pdf seems to blow some of the contrast way out. The invisible PDF is not like this, and test prints from this same pdf to a high-end printer I have access to were not like this.
    The lpi of the images has changed along the way - I will measure the actual screening used tonight and post back. A much coarser screen appears to be used now.
Additionally, according to FedEx, my books were shipped from a New York State address, unlike the West Coast pick-up point of the previous orders. Maybe the problem is varying equipment/QA, etc., from factory to factory?

I did take home a line gauge and a loupe to give the 3 books a once-over. Just a reminder that all 3 books are hardcover: 1 from iPhoto 4 with default dpi, 1 from iPhoto 5 (initial version with default dpi), and 1 from iPhoto 5 after updates with dpi increased to 300dpi. The last iPhoto 5 book I had delivered via FedEx shows a New York State pick-up point instead of the previous West Coast pick-up points. Maybe books are being produced in multiple locations?
For comparison I looked at common photos used in each version of the book. The photos are the same ones from my iPhoto library, were not changed, and were reproduced at the same size.
    Because of the fineness of the screen I could not get an accurate reading on any of the screens but they appear to be 150lpi+ in all books.
There is a noticeable difference between the iPhoto 4 book and both iPhoto 5 books. The iPhoto 4 book had a finer screen, and I would say by a decent enough amount. Both iPhoto 5 books had slightly coarser screens, and if they differ from each other it is by a small amount.
There is a large difference in clarity between the two iPhoto 5 books, and this is mostly due to the 300dpi change in the prefs file. Though some of it could also be due to equipment differences or quality control as well. The sharpness of the iPhoto 4 book is unmatched in the others.
One item I noticed is a difference in the paper. I did not have a paper gauge to measure the weight of the paper, but it seems similar, as do the vibrancy and coating of the paper. The iPhoto 5 books have a paper that folds over easier and, when turning the page, can fold in on itself if the paper is gripped lower towards the spine. This may be the result of the paper grain being vertical instead of horizontal. Maybe this is a result or requirement of the 2-sided printing?
    Also, in both iPhoto 5 books there is a noticeable difference in image tonal range. There appears to be less mid-range tones and an increase in each extreme resulting in images having much more contrast with increased shadow and highlight. This means more blacks/darks, and white areas being more white with less midrange tones. It appears to be more of a problem with single tone images (black and white but saved as rgb and not greyscale) rather than color images. Sepia images exhibit this a great deal as well.
Why this happens is anyone's guess, but it may be anything from a problem with iPhoto and how it handles the images, to incorrect calibration of the image output, to the actual reproduction mechanism, or simply less quality control.
Lastly, the iPhoto 4 book is top quality; the initial iPhoto 5 book was only OK, and print quality certainly improved with iPhoto 5 updated with the 300dpi resolution change, but it does not equal the level of quality of iPhoto 4. The 5 books I had delivered this week will be used and hopefully appreciated, but it is too bad they are not of the initial quality I was excited about.

  • 11g AMM - Diagnostic+Tuning Packs required?

    Am I required to purchase a license for the Diagnostic+Tuning packs in order to utilize Automatic Memory Management (AMM) in Oracle 11g?  I haven't been able to find any documentation online stating so.

Oracle uses some features internally that you would have to pay for if you used them (even just accessing the tables). But as long as you don't access them yourself, you are fine. You can check the view DBA_FEATURE_USAGE_STATISTICS; access to the Options and Packs is controlled by the CONTROL_MANAGEMENT_PACK_ACCESS parameter.
    If you have access to MOS, this is explained in AWR Reporting - Licensing Requirements Clarification (Doc ID 1490798.1)
Edit: On the Oracle Database Comparisons page (Oracle Database | Oracle), search for "Automatic Memory"; it shows it is available for all editions, even though you can only buy Diagnostics on Enterprise.
    Edit2:  Of course, some things you can use: https://blogs.oracle.com/optimizer/entry/does_the_use_of_sql

  • Recovery Mechanism in Solaris

    Hi to all,
I am new to Solaris (coming from the HP-UX world) and I was wondering if there is some tool in the Solaris world for making an exact image of the system and using it afterwards to restore the system as it was at the moment of taking the image.
HP-UX has such a tool called Ignite (make_tape_recovery) and it is a very handy tool for this purpose.
Is there something like this in Solaris?

    dejan.stojcevski wrote:
    Thanks a lot Ivan.
    This answered my question.
    I will search around to learn some more about flash archives and see what they can do too.
Anyway, a little comparison with HP's make_tape_recovery:
1. make_tape_recovery creates a bootable tape. No need to boot from an installation CD; you boot directly from the tape. ufsdump does not do this.
2. make_tape_recovery does not require you to partition the underlying root disk. It does this automatically. ufsdump does not have this functionality.
3. make_tape_recovery is a fully automated backup/recovery mechanism, meaning that after you boot from the tape you can return in around 1 hour and you will have a completely recovered system. ufsdump requires mounting/unmounting of slices.
This sounds a lot like SCO's root/boot floppy/tape restore solution.
Yet I think that this comparison is not correct, because Sun's ufsdump and HP's make_tape_recovery are two different types of software (different philosophies). Sun's ufsdump is like HP's fsbackup utility - a tool for full file system backups. HP's make_tape_recovery <=> Sun's ??? (flash archives maybe?)
I don't think Sun has anything like this, and the closest you could get would be a Flash archive or a Jumpstart server. And then you would still have to do a restore after a machine has been booted up.
    The closest you could get to something like in the Sun world would probably be "Bare Metal Restore" from Veritas, now Symantec.
    alan

  • Speed of RTTI comparison

RTTI comparison in Sun C++ 5.8 is unbelievably 60 times slower than in g++ 3.4.3. Disassembly shows Sun C++ calls into the runtime function "const std::type_info&__Crun::get_typeid(void*)" whilst g++ just
    handles it inline. Is there any missed patch that can boost Sun C++ performance of RTTI?
    ~/test $CC -verbose=version
    CC: Sun C++ 5.8 Patch 121018-11 2007/05/02
    ~/test $CC -fast -xO5  -sync_stdio=no -xdepend=yes -xipo=2 -o rtti_comp-CC rtti_comp.cxx
    ~/test $time ./rtti_comp-CC
    real    3m8.994s
    user    3m8.824s
    sys    0m0.056s
    ~/test $g++ --version
    g++ (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
    ~/test $g++ -O2 -o rtti_comp-gcc rtti_comp.cxx
    ~/test $time ./rtti_comp-gcc
    real    0m3.856s
    user    0m3.850s
    sys    0m0.004s
    ~/test $cat rtti_comp.cxx
    #include <iostream>
    class TA { virtual void f()const {}};
    class TB : public TA {};
    using namespace std;
int main()
{
        TA *i1 = new TA();
        TA *i2 = new TB();
        const int num = 1000000000;
        for (int i=0; i<num; i++)
            if (typeid(*i1)==typeid(*i2))
                cerr << "*i1 should not be equal to *i2" <<endl;
    }

    The RTTI mechanism in Sun Studio has not changed in several years. There is no update for it.
    RTTI is not normally used in a high-bandwidth part of a program. Other mechanisms that are less expensive are normally more appropriate.
    That is, you normally don't care about the exact type, and use virtual function call dispatch to get the functionality appropriate for the object.
In cases where the type does matter (if the Window is a ScrollableWindow, do X, else do Y), you normally use dynamic_cast instead of typeid, since a still-more-derived type is usually also OK. Comparing typeids is not a common operation.
    But whether you use typeid or dynamic_cast, the test usually controls an operation that is orders of magnitude more expensive. Thus, whether RTTI is expensive or cheap ordinarily has no measurable effect on program performance. We therefore made no attempt to cater to special cases. That is, in the general case a vtable walk is needed to find the actual type. In special cases the compiler can generate code to get the typeid directly. We did not make checks for the special cases, but use the same routine for all.
Does the speed of RTTI have an important effect on your program's performance? If so, I'd be interested to know why.

  • Internal mechanism of Sorted Set

    Hi,
I want to understand the internal mechanism of how the objects in a SortedSet get sorted automatically. I also have some confusion about the objects that are stored in a SortedSet:
1. Is it mandatory for the objects to be stored in a SortedSet to override equals() and hashCode()? If yes, are these methods called automatically every time a new object is stored in the SortedSet?
OR
2. Is it mandatory for the objects to be stored in a SortedSet to implement the Comparable or Comparator interface? If yes, is the compareTo method called automatically every time a new object is stored in the SortedSet?
    Thanks

Neha_Khands wrote:
1. Is it mandatory for the objects to be stored in Sorted set to override equals and hashCode?
No, but it is probably a very good idea to do so.
If yes, are these methods called automatically every time a new object is stored in Sorted Set?
They might be. It would depend on the particular implementation. The documentation for the given implementation should specify any requirements around those methods.
OR
2. Is it mandatory for the objects to be stored in Sorted set to implement comparable or comparator interface?
Either the class you're storing must implement Comparable, OR you must provide a separate Comparator object (and that Comparator would NOT be implemented by the class you're storing).
If yes, is the compareTo method called automatically every time a new object is stored in Sorted Set?
Yes. Imagine you have a bunch of cards with numbers written on them, and they're laid out in a row, in numerical order, such as [2, 8, 9, 10, 14, 19]. Now you have a card with the number 12 on it. You want to insert it into that group, in the proper place to keep order. To do that, you're going to have to compare that card's value to some of the cards already there, to know where it goes. At some point you'll compare it to 10, find that it's greater, and know that it must therefore go to the right of 10. This comparison is performed "automatically" by you when someone asks you to add (store) a new card into that set.
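The card analogy maps directly onto TreeSet, the standard SortedSet implementation; a small runnable sketch:
import java.util.SortedSet;
import java.util.TreeSet;

public class SortedSetDemo {
    public static void main(String[] args) {
        // TreeSet keeps elements ordered; Integer implements Comparable,
        // so compareTo() is called on each add() to find the insertion point.
        SortedSet<Integer> cards = new TreeSet<>();
        for (int n : new int[] {2, 8, 9, 10, 14, 19}) {
            cards.add(n);
        }
        cards.add(12);                 // compared against existing elements on insert
        System.out.println(cards);     // [2, 8, 9, 10, 12, 14, 19]

        // Alternatively, supply a Comparator when the element class does not
        // implement Comparable (or to override its natural order):
        SortedSet<String> byLength =
            new TreeSet<>((a, b) -> a.length() != b.length()
                    ? Integer.compare(a.length(), b.length())
                    : a.compareTo(b));
        byLength.add("pear");
        byLength.add("fig");
        byLength.add("banana");
        System.out.println(byLength);  // [fig, pear, banana]
    }
}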

  • Report for Comparison of Material Qty

    Hi All,
I need to develop an interactive report comparing material qty. ordered through purchase requisitions, material ordered through POs, and the corresponding material receipt report.
Can someone give a brief description of this, and the fields, tcodes & tables regarding this report? Sample code would be much appreciated.
    Thanks & regards,
    Ravi S

    To get the material number combined with the PO text you will need the help of an ABAP programmer.  The programmer can create a report for you using the function module READ_TEXT in the function group STXD.  The tables to use are:
    STXH - STXD SAPscript text file header
    STXL - STXD SAPscript text file lines
    The selection screen should have at least the following:
    OBJECT - STXH-TDOBJECT
    NAME - STXH-TDNAME
    LANGUAGE - STXH-TDSPRAS
    TEXTID - STXH-TDID
    You find the information for these fields by going to the PO text entry screen and displaying the header information under Goto -> Header.  For materials, the object is MATERIAL, the name is "material number", the language is "EN", and the text ID is BEST.  You can use this program to get long text in lots of places like information records, purchase order texts, etc.
    Hope this helps.

  • How to do comparison in a loop.

    Hi all,
      TYPES: BEGIN OF ZROUTE,
             VBELN TYPE VBELN,
             ROUTE TYPE ROUTE,
            END OF ZROUTE.
      DATA : IT_ZROUTE TYPE STANDARD TABLE OF ZROUTE,
             WA_ZROUTE TYPE ZROUTE.
          LOOP AT I_XVTTP_TAB INTO LW_XVTTP.
            WA_ZROUTE-VBELN = LW_XVTTP-VBELN.
        SELECT SINGLE ROUTE
                INTO  WA_ZROUTE-ROUTE
                FROM LIKP
                WHERE VBELN = LW_XVTTP-VBELN.
        APPEND WA_ZROUTE TO IT_ZROUTE.
         ENDLOOP.
    results from IT_ZROUTE
    vbeln    route
    1111     A
    2222     B
I have some problem with the code above. I tried to collect vbeln and route into IT_ZROUTE.
However, after I've got the result, I want to distinguish between the routes.
If route A ne route B, then display an error message.
My problem is, how do I compare the records by looping over IT_ZROUTE? If I set the route into a temporary variable, the value will always change and I have a problem doing the comparison, e.g.:
loop at it_zroute into wa_zroute.
  zroute = wa_zroute-route. "set into a variable
  if zroute = wa_zroute-route.
    "display error message
  endif.
endloop.
    Could anyone give me some tips to enhance my code? Really appreciate your help.

    Hi SW,
You should never use a SELECT statement inside a LOOP statement, as it will affect the performance of the program. Use FOR ALL ENTRIES instead.
    TYPES: BEGIN OF zroute,
             vbeln TYPE vbeln,
             route TYPE route,
           END OF zroute.
    DATA: it_zroute TYPE STANDARD TABLE OF zroute,
          wa_zroute TYPE zroute.
    ************ Addition STARTS ***********
    DATA: i_xvttp_tab_temp LIKE i_xvttp_tab OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF likp_itab OCCURS 0,
            vbeln LIKE likp-vbeln,
            route LIKE likp-route,
          END OF likp_itab.

    i_xvttp_tab_temp[] = i_xvttp_tab[].
    SORT i_xvttp_tab_temp BY vbeln.
    DELETE ADJACENT DUPLICATES FROM i_xvttp_tab_temp COMPARING vbeln.

    * FOR ALL ENTRIES with an empty driver table would read all of LIKP,
    * so select only when the table actually has entries.
    IF NOT i_xvttp_tab_temp[] IS INITIAL.
      SELECT vbeln route
        INTO TABLE likp_itab
        FROM likp
        FOR ALL ENTRIES IN i_xvttp_tab_temp
        WHERE vbeln EQ i_xvttp_tab_temp-vbeln.
      IF sy-subrc EQ 0.
        SORT likp_itab BY vbeln.          " needed for the BINARY SEARCH below
      ENDIF.
    ENDIF.
    ************ Addition ENDS ***************

    LOOP AT i_xvttp_tab INTO lw_xvttp.
      wa_zroute-vbeln = lw_xvttp-vbeln.
    * The SELECT SINGLE from LIKP is no longer required here;
    * the route now comes from the buffered table instead:
      READ TABLE likp_itab WITH KEY vbeln = lw_xvttp-vbeln BINARY SEARCH.
      IF sy-subrc EQ 0.
        wa_zroute-route = likp_itab-route.
      ENDIF.
      APPEND wa_zroute TO it_zroute.
      CLEAR wa_zroute.                    " do not carry values into the next pass
    ENDLOOP.
    Please explain the later part again; I am not clear about the requirement.
    Regards,
    Anil Salekar
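
    For the later part of the question - checking whether all the routes collected in IT_ZROUTE are the same - one simple pattern is to remember the first route and compare every following row against it. A minimal sketch, assuming IT_ZROUTE is filled as above and that a plain error message is acceptable:

    * Sketch: flag an error when the deliveries do not share a single route.
    DATA: lv_first_route TYPE route,
          lv_differs     TYPE c.

    READ TABLE it_zroute INTO wa_zroute INDEX 1.
    IF sy-subrc = 0.
      lv_first_route = wa_zroute-route.   " remember the first route
      LOOP AT it_zroute INTO wa_zroute.
        IF wa_zroute-route <> lv_first_route.
          lv_differs = 'X'.               " found a second, different route
          EXIT.
        ENDIF.
      ENDLOOP.
    ENDIF.

    IF lv_differs = 'X'.
      MESSAGE 'Deliveries do not share the same route' TYPE 'E'.
    ENDIF.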

  • Comparison of SSD with hard disk drives

    Attribute or characteristic: solid-state drive (SSD) vs. hard disk drive (HDD)

    Spin-up time
      SSD: Instantaneous.
      HDD: May take several seconds. With a large number of drives, spin-up may need to be staggered to limit the total power drawn.
    Random access time[45]
      SSD: About 0.1 ms - many times faster than HDDs, because data is accessed directly from the flash memory.
      HDD: Ranges from 5–10 ms, due to the need to move the heads and wait for the data to rotate under the read/write head.
    Read latency time[46]
      SSD: Generally low, because the data can be read directly from any location. In applications where hard disk seeks are the limiting factor, this results in faster boot and application launch times (see Amdahl's law).[47]
      HDD: Generally high, since the mechanical components require additional time to get aligned.
    Consistent read performance[48]
      SSD: Read performance does not change based on where data is stored on an SSD.
      HDD: If data is written in a fragmented way, reading it back will have varying response times.
    Defragmentation
      SSD: SSDs do not benefit from defragmentation: there is little benefit to reading data sequentially, and any defragmentation process adds additional writes on the NAND flash, which already has a limited cycle life.[49][50]
      HDD: HDDs may require defragmentation after continued erasing and writing of data, especially involving large files or when disk space becomes low.[51]
    Acoustic levels
      SSD: SSDs have no moving parts and make no sound.
      HDD: HDDs have moving parts (heads, spindle motor) and produce varying levels of sound depending on the model.
    Mechanical reliability
      SSD: The lack of moving parts virtually eliminates mechanical breakdowns.
      HDD: HDDs have many moving parts that are all subject to failure over time.
    Susceptibility to environmental factors[47][52][53]
      SSD: No flying heads or rotating platters to fail as a result of shock, altitude, or vibration.
      HDD: The flying heads and rotating platters are generally susceptible to shock, altitude, and vibration.
    Magnetic susceptibility[citation needed]
      SSD: No impact on flash memory.
      HDD: Magnets or magnetic surges can alter data on the media.
    Weight and size[52]
      SSD: The flash memory and circuit board material are very light compared to HDDs.
      HDD: Higher-performing HDDs require heavier components than laptop HDDs, which are light, but not as light as SSDs.
    Parallel operation[citation needed]
      SSD: Some flash controllers can have multiple flash chips reading and writing different data simultaneously.
      HDD: HDDs have multiple heads (one per platter), but they are connected and share one positioning motor.
    Write longevity
      SSD: Solid-state drives that use flash memory have a limited number of writes over the life of the drive.[54][55][56][57] SSDs based on DRAM do not have a limited number of writes.
      HDD: Magnetic media do not have a limited number of writes.
    Software encryption limitations
      SSD: NAND flash memory cannot be overwritten in place; data has to be rewritten to previously erased blocks. If a software encryption program encrypts data already on the SSD, the overwritten data is still unsecured, unencrypted, and accessible (drive-based hardware encryption does not have this problem). Also, data cannot be securely erased by overwriting the original file without special "Secure Erase" procedures built into the drive.[58]
      HDD: HDDs can overwrite data directly on the drive in any particular sector.
    Cost
      SSD: As of October 2010, NAND flash SSDs cost about US$1.40–2.00 per GB.
      HDD: As of October 2010, HDDs cost about US$0.10/GB for 3.5 in and US$0.20/GB for 2.5 in drives.
    Storage capacity
      SSD: As of October 2010, SSDs come in sizes up to 2 TB, but are typically 512 GB or less.[59]
      HDD: As of October 2010, HDDs are typically 2–3 TB or less.
    Read/write performance symmetry
      SSD: Less expensive SSDs typically have write speeds significantly lower than their read speeds; higher-performing SSDs, and those from particular manufacturers, have balanced read and write speeds.[citation needed]
      HDD: HDDs generally have symmetrical read and write speeds.
    Free block availability and TRIM
      SSD: SSD write performance is significantly impacted by the availability of free, programmable blocks. Previously written data blocks that are no longer in use can be reclaimed by TRIM; however, even with TRIM, fewer free, programmable blocks translate into reduced performance.[25][60][61]
      HDD: HDDs are not affected by free blocks or by the operation (or lack) of the TRIM command.
    Power consumption
      SSD: High-performance flash-based SSDs generally require 1/2 to 1/3 the power of HDDs; high-performance DRAM SSDs generally require as much power as HDDs and consume power when the rest of the system is shut down.[62][63]
      HDD: High-performance HDDs generally require 12–18 watts; drives designed for notebook computers typically draw about 2 watts.

    I wish I could get my head round the SSD vs HDD question for an NLE rig.  My builder is trying to persuade me to use a Toshiba 256 GB THNSNC256GBSJ for OS and programs, and it is only NZ$20 more expensive than the 450 GB 10k rpm VelociRaptor I was originally planning to use for the OS.  That sounds suspiciously cheap to me, and I am concerned about the finite writes to SSD - mainly because I don't really understand it.
    The rest of the new build is
    3930K
    Gigabyte X79-UD5
    8 x DDR3 1600
    Coolermaster with 750W PSU
    Geforce GTX570
    I plan to transfer the drives from my current system as a starting point, and reassess after giving it some use.  That means
    Either the above SSD or 450Gb 10k Raptor for OS (new drives)
    300Gb 10k rpm Raptor  (currently used for OS in old box)
    150Gb 7k4 rpm Raptor (reserved for Photoshop Scratch in old box)
    2 x 1Tb WD Blacks (data drives)
    2 x 1Tb WD USB3 externals
    I don't know how I would configure the drives in the new box, but I have seen Harm's table and will try to follow his advice.  It's a dreadful thing to admit, but I don't have a backup strategy, and the above drives are well over half full. Well over!  And I am only just getting serious about video (the rest is mainly CR2 files from my Canon 1Ds3 and 1D4).
    I know it must be like banging your head against the wall, but should I avoid that SSD and go with the 450 GB Raptor?  I have read a comment that the WD Blacks don't work well in RAID 0.  Is that BS or true?
    I am about to give the go-ahead so need to confirm the spec.
