Three-part blog series about Reducing the Cost to Implement a Security Plan

Part 3 of a great blog series by Kenneth in AlienVault Support, who has "heard it all" about the problems SMBs face in implementing a security plan on a small budget. Kenneth offers lots of practical and helpful advice for IT and security practitioners.
https://www.alienvault.com/blogs/security-essentials/third-step-in-reducing-the-cost-to-implement-a-...
This topic first appeared in the Spiceworks Community

hi Elistariel -
With no texting plan, it is 25 cents per picture message. The LG VX5500 (same phone my daughter has) does not use a memory card, so you can try two different programs on your computer (both free) and see if either one will get the pics off and save them on your computer; from there you can upload to your online album without a per-picture charge.
You can try Verizon's VCast media manager - download and install it on your computer, then use the USB cable to link the phone to the computer and transfer the pics with VCast.
Here's a link
A third party program called BitPim will also work, but it's more technical and does a lot more than just transfer your media. It can also brick your phone if you don't know what you are doing, so it's "use at your own risk", as Verizon won't cover any losses due to using BitPim. It does work though--I have used it, very cautiously!

Similar Messages

  • How to reduce the cost for this select script

    Hi,
    "select id from order where cmd like 'Error ID %'"
    This is my script. The order table has more than 5 lakh (500,000) records. When I select id with the above condition, the cost is 2200 and the cpu_cost is 235660697 (both taken from plan_table). Could anyone please suggest how to reduce the cost? (FYI: the cmd column has an index and a NOT NULL constraint.)
    Thanks.

    user13294228 wrote:
    "select id from order where cmd like 'Error ID %'"
    This is my script. The order table has more than 5 lakh (500,000) records. When I select id with the above condition, the cost is 2200 and the cpu_cost is 235660697 (both taken from plan_table). Could anyone please suggest how to reduce the cost? (FYI: the cmd column has an index and a NOT NULL constraint.)
    What do these cost numbers mean? Do you know what they mean and what measurement unit is used for these costs? Will a cost of 2000 be fine? 1500? What specific number will make you sit back and think that the cost is now acceptable?
    Or do you not think it is more important to look instead at what the query does and why, and then determine whether there are methods to do that better and faster?
    So let's look at what the query does. It uses a LIKE predicate. This means that if an index is available, it cannot be used to find a specific indexed value, because the query does not know the exact value; all it knows is the first few characters of the value.
    So how would an index be used? Have you looked at the execution plan? Do you understand why the CBO made the decision it did?
    Now - how do you expect the CBO to find the relevant rows any faster? The index benefits the query how? As the CBO cannot put the index to any better use than what it is already doing, what other options are there? Can alternative indexing or data structures be considered? Can parallel processing be used?
    These questions are intended to make you analyse the problem and understand it. That is always the first step; solving the problem only comes after that.
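    As a starting point for that analysis, here is a minimal sketch of how the execution plan can be inspected; the table name "orders" is a stand-in (ORDER is a reserved word, and the real table name isn't shown), while the column names come from the question:
    EXPLAIN PLAN FOR
      SELECT id
      FROM   orders
      WHERE  cmd LIKE 'Error ID %';

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    The plan output shows whether the index on cmd is being range-scanned and what cardinality the CBO expects, which is the information needed to answer the questions above.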

  • Reducing the Cost of the query by indexing.

    Hello,
    I have a query as under :
    SELECT T1_WO.WO_ID, T1_RW_MSG.RMSG_FN_ID, T1_RW_MSG.RMSG_PROD_CD, T1_RW_MSG.RMSG_OPRN_CD,
           T1_WO.WO_ENTRY_DT, T1_RW_MSG.RMSG_OUR_REF_NUM, T1_WO.WO_CURR_QUEUE, T1_WO.WO_PRIORITY,
           T1_WO.WO_STATUS_CD, T1_RW_MSG.RMSG_TRANS_MD, T1_RW_MSG.RMSG_SRC_TYP, T1_WO.WO_TYP_FLG,
           T1_WO.WO_DEQUEUE_DT, T1_RW_MSG.RMSG_BTCH_ID, T1_RW_MSG.RMSG_SEQ_NUM_IN_BTCH,
           T1_WO.WO_FRM_QUEUE, T1_WO.WO_ORIG_ENTRY_DT, T1_WO.WO_SRC_TYP, T1_RW_MSG.RMSG_TILIS_PROC_LOC
    FROM   T1_RW_MSG, T1_WO
    WHERE  T1_RW_MSG.RMSG_FN_ID = T1_WO.RMSG_FN_ID
    AND    T1_WO.WO_TYP_FLG = 'R'
    AND    T1_WO.WO_ERR_FLG = 'F'
    AND    T1_WO.WO_BSY_FLG = 'F'
    AND    (   T1_WO.WO_CURR_QUEUE IN ('MUMBAISAP', 'CCD19SAP')
            OR (T1_WO.WO_CURR_QUEUE IN ('') AND T1_WO.WO_FRM_ROLE_INSTANCE =  'CCD19SAP')
            OR (T1_WO.WO_CURR_QUEUE IN ('') AND T1_WO.WO_FRM_ROLE_INSTANCE <> 'CCD19SAP'))
    ORDER BY T1_WO.WO_PRIORITY DESC, T1_WO.WO_ENTRY_DT ASC;
    This query has an ORDER BY clause on two columns, WO_PRIORITY (descending) and WO_ENTRY_DT, which are not indexed. I am now trying to reduce the cost of this query by indexing those columns. Will this reduce the cost of the query?

    Wrap your code in [code] and [/code] tags to preserve the formatting you apply to make it readable. You're unlikely to get much help if you make it hard to read.
    An example of code formatting:
    WITH t1_wo AS (SELECT 'a' wo_curr_queue from dual
                   UNION
                   SELECT to_char(null) from dual
                   UNION
                   SELECT '' from dual)
      SELECT *
      FROM t1_wo
      WHERE t1_wo.wo_curr_queue IN ('')
    Interestingly, that returns no rows, which makes me suspect that the sections of your query that include t1_wo.wo_curr_queue IN ('') will never work. (In Oracle the empty string is treated as NULL, and a NULL never matches an equality or IN comparison.)
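    As for the original question about indexing the ORDER BY columns, a minimal sketch follows; the index name is made up, while the table and column names are taken from the query. Note that adding the index does not guarantee the sort is avoided: the optimizer will only use it if that access path is cheaper than a full scan plus sort, so compare the execution plans before and after.
    CREATE INDEX t1_wo_prio_entry_ix
      ON t1_wo (wo_priority DESC, wo_entry_dt);

    EXPLAIN PLAN FOR
      SELECT wo_id
      FROM   t1_wo
      ORDER  BY wo_priority DESC, wo_entry_dt;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);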

  • What is BI? How do we implement it, and what is the cost to implement it?

    What is BI? How do we implement it, and what is the cost to implement it?
    Thanks,
    Sumit.

    Hi Sumit,
    Below is a description addressing your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making. This process is outlined below.
    The five key stages of Business Intelligence:
    1. Data Sourcing
    2. Data Analysis
    3. Situation Awareness
    4. Risk Assessment
    5. Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:
    • probability theory - e.g. classification, clustering and Bayesian networks;
    • statistical methods - e.g. regression;
    • operations research - e.g. queuing and scheduling;
    • artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.).  Situation awareness is the grasp of  the context in which to understand and make decisions.  Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales or customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, physical and logical inventories were conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory comprised a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a 'landing zone' for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected and corrected records to occur in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
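    As an illustration of the "views as data quality filters" idea above, here is a minimal sketch; the table and column names are hypothetical, not taken from the McMaster warehouse:
    -- Only rows that pass basic quality rules are exposed to the ETL tool;
    -- no procedural code is required.
    CREATE OR REPLACE VIEW v_enrolment_clean AS
    SELECT student_id,
           term_code,
           programme_code,
           enrolment_date
    FROM   student_enrolment
    WHERE  student_id IS NOT NULL
    AND    term_code IS NOT NULL
    AND    enrolment_date >= DATE '2000-01-01';  -- assumed start date for warehouse collection
    The ETL tool can then treat the view like any other source table, and the same filter logic can be materialized on the host or on the target, as described above.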
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboards/scorecards for performance indicator applications
    Typically a business intelligence (BI) project is scoped to deliver an agreed-upon set of data marts. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Diagram: development, test and production warehouses fed by ETL from DB2 and Oracle operational sources and other data through a staging area (SAS data sets), with further ETL into data marts (DB2/Oracle) accessed by users through BI tools; no direct user access to the warehouses.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets, and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY – AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific
    security policy (who you are). Authorization is the process of determining which permissions the user has for which resources. Authentication is a prerequisite for authorization. At McMaster, business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier, which is currently authenticated using Microsoft Active Directory. After a successful authentication, the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster, aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Using SAS® Information Map Studio, an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or the SAS® Information Delivery Portal (ability to view only). Previously, access to data residing in DB2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful because they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent, can cross databases, and hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role-based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8 GHz processors and 2 GB of RAM, running Windows 2000 Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4-way 64-bit 1.5 GHz Itanium 2 server
    - 16 GB RAM
    - 2 x 73 GB drives (RAID 1) for the OS
    - 1 x 10/100/1 Gb copper (Cu) Ethernet card
    - 1 x Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2-way 32-bit 3 GHz Xeon server
    - 4 GB RAM
    - 1 x 10/100/1 Gb copper (Cu) Ethernet card
    - 1 x Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72 GB drives (RAID 5), 360 GB total, for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client Tier
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in separate workbooks for different subgroups (for example, by department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on the inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and possible detractors, and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results, that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to

  • What are the disadvantages of implementing APO Demand Planning without improving or cleaning up MRP? In other words, is DP implementation dependent on the MRP process? Gurus, please advise

    Hi all. What are the disadvantages of implementing APO Demand Planning without improving or cleaning up MRP? In other words, is DP implementation dependent on the MRP process? Gurus, please advise.

    Hi Amol,
    DP is the demand planning engine; here you estimate your forecast (future sales).
    MRP is a supply planning engine; here you estimate the replenishment.
    From a technical perspective the two are independent of each other. From a business perspective they are not: the problem you will have if you don't clean up your MRP elements is not a problem in DP itself. You can have a very successful DP implementation and very good forecast accuracy, but when MRP runs and estimates the replenishment, the effort you made in DP will not translate into good results and your planning situation will still not be good.
    Kind Regards,
    Mariano

  • How to reduce the cost of procedure?

    hi,
    Suppose I have a procedure of more than 1000 lines that takes a few seconds during normal processing. How can one reduce that processing time?
    Thanx ,
    Piyush

    DBMS_PROFILER is one way to analyse where the time is being spent. You can use this from the command line, or many GUI tools such as PL/SQL Developer provide a convenient interface.
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28370/tuning.htm
    http://www.dbasupport.com/oracle/ora10g/Unleashing-DBMS_Profiler.shtml
    http://www.allroundautomations.com/plsprofiler.html
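    A rough sketch of the DBMS_PROFILER approach follows; the run comment and procedure name are placeholders, and the profiler tables (plsql_profiler_*) are assumed to have already been created with Oracle's proftab.sql script:
    BEGIN
      DBMS_PROFILER.START_PROFILER('tuning run 1');
      my_long_procedure;               -- placeholder for the 1000-line procedure
      DBMS_PROFILER.STOP_PROFILER;
    END;
    /

    -- Which lines consumed the most time across the profiled units?
    SELECT u.unit_name, d.line#, d.total_occur, d.total_time
    FROM   plsql_profiler_units u
           JOIN plsql_profiler_data d
             ON  d.runid = u.runid
             AND d.unit_number = u.unit_number
    ORDER  BY d.total_time DESC;
    The GUI tools mentioned above essentially wrap these same tables in a friendlier interface.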

  • Understanding the COST column of an explain plan

    Hello,
    I executed the following query, and obtained the corresponding explain plan:
    select * from isis.clas_rost where cour_off_# = 28
    Description                                        COST  Cardinality  Bytes
    SELECT STATEMENT, GOAL = FIRST_ROWS                   2           10   1540
      TABLE ACCESS BY INDEX ROWID   ISIS.CLAS_ROST        2           10   1540
        INDEX RANGE SCAN            ISIS.CLAS_ROST_N2     1           10
    I don't understand how these cost values add up. What is the significance of the cost in each row of the explain plan output?
    By comparison, here is another plan output for the following query:
    select * from isis.clas_rost where clas_rost_# = 28
    Description                                        COST  Cardinality  Bytes
    SELECT STATEMENT, GOAL = FIRST_ROWS                   1            1    154
      TABLE ACCESS BY INDEX ROWID   ISIS.CLAS_ROST        1            1    154
        INDEX UNIQUE SCAN           ISIS.CLAS_ROST_U1     1            1
    Thanks!

    For the most part, you probably want to ignore the cost column. The cardinality column is generally what you want to pay attention to.
    Ideally, the cost column is Oracle's estimate of the amount of work that will be required to execute a query. It is a unitless value that attempts to combine the cost of I/O and CPU (depending on the Oracle version and whether CPU costing is enabled) and to scale physical and logical I/O appropriately. As a unitless number, it doesn't really relate to something "real" like the expected number of buffer gets. It is also determined in part by initialization parameters, session settings, system statistics, etc. that may artificially increase or decrease the cost of certain operations.
    Beyond that, however, cost is problematic because it is only as accurate as the optimizer's estimates. If the optimizer's estimates are accurate, that implies that the cost is reasonably representative (in the sense that a query with a cost of 200 will run in less time than a query with a cost of 20000). But if you're looking at a query plan, it's generally because you believe there may be a problem which means that you are inherently suspicious that some of the optimizer's estimates are incorrect. If that's the case, you should generally distrust the cost.
    Justin
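    If you want to check how good the optimizer's cardinality estimates actually are, one common approach (sketched here against the poster's first query; requires Oracle 10g or later) is to run the statement with the gather_plan_statistics hint and then compare estimated rows (E-Rows) with actual rows (A-Rows):
    SELECT /*+ gather_plan_statistics */ *
    FROM   isis.clas_rost
    WHERE  cour_off_# = 28;

    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    Large gaps between E-Rows and A-Rows are a much better tuning signal than the cost figure itself.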

  • Can i trade in my ipod touch 2nd gen for the new ipod to reduce the cost of the new ipod touch?

    My iPod touch 2nd gen has been dropped once and there is a grey mark on the screen. I was thinking, can we trade in our old Apple product for a new one?
    I don't mean an exact trade, maybe a reduced price on the new one. =)

    Hmm, I am from Malaysia, so Gazelle can't help. I think I may opt for the 10% discount if I am going to get the new one. =)
    Btw, do you know how?

  • How can i reduce the cost of this explain plan

    Plan
    SELECT STATEMENT ALL_ROWS   Cost: 180,804   Bytes: 984,239,144   Cardinality:

    user565033 wrote:
    2. I am trying to use this query to create a materialized view. My problem is that the query returns in under 5 minutes, but when I try to create the view (or even a regular table), it goes into a long-running operation (sort output) whose time alone is over 4-5 hours.
    If it takes 5 minutes to return the first rows but hours to process the complete result set, this suggests that the problem is very likely caused by the function calls, since the sort of the result set must already have been performed to return the first rows and the function calls will probably be performed on the final result set. Try to perform the same query without any function calls and check whether you can then build the table more quickly.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
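    A minimal way to test Randolf's suggestion is a create-table-as-select with the function calls stripped out; the table, column and function names below are placeholders, not the poster's actual query:
    -- If this completes quickly, the PL/SQL function calls in the original
    -- query are the likely bottleneck, not the sort or the base query.
    CREATE TABLE mv_source_test AS
    SELECT t.key_col,
           t.amount_col              -- raw column instead of f_expensive(t.amount_col)
    FROM   some_big_table t
    ORDER  BY t.key_col;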

  • I am trying to send a photo in an email and the file is huge. I remember something about sending the file with less detail and therefore reducing the size.  How?

    I am trying to send a photo in an email and the file is huge.  I remember something about reducing the size of the file but of course can't remember how.

    Use a utility such as Graphic Converter to change the image to a JPEG, using options to reduce the size of the converted image.

  • What is the best way to reduce the size of my Library

    Over time I have been extremely lazy and have not managed my Library very well. I have used CS2 along side Aperture and have done all the following:
    1 - Edited files in CS2 and then imported them into Aperture as a TIFF or PSD
    2 - Edited files in CS2 and then imported them into Aperture as a TIFF or PSD then made further adjustments in Aperture
    3 - From Aperture 'opened files with external editor' (CS2) and made adjustments in CS2
    4 - From Aperture 'opened files with external editor' and made adjustments in CS2 then made further adjustments in Aperture
    And to top all this off I have been merrily keywording and changing the metadata. Switching between CS2 and Aperture in this way has left me with some VERY big files (especially the TIFFs that Aperture produces when I use the 'open with external editor' function).
    I am looking for advice on how I can set about reducing the size of some of these files and what a suggested approach might be for 1-4, as each of them presents its own special problem.
    Here's hoping you can help
    Cheers

    Hard drive capacities keep increasing. Easiest (seriously) is just to buy a larger hard drive. The time expended trying to save space on previously manipulated images is seldom worth the effort.
    -Allen Wicks

  • Question on Blu ray image size output and reducing the size

    Hi
    I have been using Encore for Blu-ray projects and it has been proving very helpful. However, recently I have had an issue with output size. Normally Encore estimates that it will fill the disc nicely and produces an image/folder of about 17-20 GB (using a 25 GB disc), but now, out of the blue, the output size has been 24.8 GB, which is technically less than 25 GB; as anyone who burns to media knows, though, the disc doesn't actually hold a full 25 GB (more like 23.8 GB, I can't remember without checking). Of course this is a problem, because my output size is now consistently about 600 MB - 1 GB over the disc's actual capacity. I figured Encore normally compresses files to fit, since I have burned similar amounts before and never had an issue.
    My issue is the slightly too large file size. I figured the easiest way to reduce the Blu-ray output size was to play with the transcode settings, which are usually left at the default Blu-ray preset (the quality preset is called something like "1280x720p 50 High Quality MPEG-2"), so as a test I cut the default bitrate from 25 Mbps to 20; I didn't want to lower it too much, just enough to see what I could save in file size. I re-transcoded the file and built another image, but sadly the image came out at 26-something GB, so it actually grew after reducing the bitrate, which left me at a loss. I am wondering what common ways there are to lower the file size by a few hundred MB, or up to 1 GB in this case, without removing content from the disc (it would be a waste to split the content and use a whole disc for a leftover 600 MB - 1 GB of project). I want to keep the quality as high as possible, but I understand that reducing size normally means a quality hit, so I can deal with a slight loss. I have been browsing the forums here, and though some topics seem similar, they deal with file sizes coming out at 40 GB+.
    I'm no genius with Encore. As said, the file size is normally between 17-20 GB, so I figure Encore was using a "fit to disc" type feature, but because the output size is technically below 25 GB yet not low enough to actually burn, I was wondering two things:
    1) Is it possible to make Encore itself shrink these files a little more? Lowering the target size by 1 GB would do wonders, but the only option I have seen is to choose a 25 GB disc or a DL disc, with no way to customize the output size (forcefully lowering it by 1 GB).
    2) Because I imagine the above can't be done, I was wondering how to go about reducing the size by around 1 GB or so. I don't need 10 GB freeing up, only a small amount; as said, this is an issue I have had with the last few projects, and the amount needed varies from 600 MB to 1 GB (it doesn't usually exceed 1 GB of free space, except when I lowered the bitrate, which increased the size a lot). In the past I always just cut the bitrate of files down a little (never by a lot, but enough to free up space), but that clearly failed my test, so I am at a loss for how to reduce the file size without a huge loss in quality.
    For the record, I have already produced the output as both an image file and a folder, and both have the same issue.
    Extra info on the project:
    It contains 4 menus and 13 video files which vary in length from about 5 minutes up to about 20 minutes for the largest. I commonly burn this number of video files at this quality with no issue, so I would rather not remove files from the project.
    The quality preset is, as said above, the default "1280x720p 50 High Quality MPEG-2" setting. I am not at my computer with Encore at the moment so I can't post full specifics right now, but if you have Encore handy you should be able to check.
    I tried reducing the bitrate but that increased the overall size. The plan was to lower the bitrate to around 21 Mbps (from 25) and then do the same for multiple (or all) files until it fitted, but after it increased the size I lost hope there; it said the estimated file size (for the individual video) would drop by around 300-400 MB, but that didn't work.
    I have tried both folder and ISO output, and both always come out the same size.
    Basically, my main question is: any suggestion for dropping the file size by up to 1 GB without taking items out of the project or a really big loss in quality? (I don't mind a small loss, as I can apply whatever solution to all the files, which should help.)
    Any suggestions on what I can try? As you probably gather, I'm not a genius when it comes to this stuff, but I have tried to include what I thought was relevant.

    Hi and thank you for the reply;
    One problem I seem to be having when reducing the bitrate is that the file size oddly increases, from 24.8 GB to 26.4 GB; I can't really figure out why it is having the opposite effect. I checked the "streams" folder in the output and compared automatic transcoding with the manually lowered bitrate, and I'm not sure where the increased file size is coming from. I found the video file causing the issue: the video files average around 1.6 GB each, but one file was taking 4.5 GB. After lowering that file's bitrate it came down to 2 GB, which of course saved 2 GB compared with the old file, yet the overall size had increased, which was odd. The other files all seem the same, so it doesn't appear to have re-transcoded them, and I can't make sense of why the size is increasing.
    I will try playing with the source file to see if I can do anything that way; if not, I will try transcoding in Premiere (I have had trouble with this before, so I try to avoid it).
    Oh, and yes, I was using Automatic.
    Anyway, I will first see if editing the file outside Encore helps. Thanks again for the reply.

  • Reduce material cost after invoice verification

    Hi
    I have created a GRN for a purchase order for which invoice verification is also done. Now I want to reduce the value of the material cost. In the MIRO transaction, when I select the material tab and enter the material number and the value, the system gives an error message: error in account determination for transaction key PRD. The system is posting the value to the price difference account, which should not happen; if I post it to the PRD account, the value of the material is not reduced.
    Please let me know the right process to address this issue.

    The configuration below has to be activated if you are posting to the G/L or material accounts directly (I guess you might have activated it already):
    SPRO - Materials Management - LIV - Incoming Invoice - Activate Direct posting to G/L Accounts and Material Accounts
    Material Cost
        If you are trying to reduce the cost of material, then you should use transaction MR21. (Always do this in the current period )
    PO Material Cost
        If you are trying to reduce the PO item cost, then you should use Subsequent Credit in the MIRO.
    But again, at the time of posting there should be sufficient stock to post against the stock account, or it will post to the price difference account if the material uses moving average (MAV) price. If the material uses standard price, it will post to the price difference account regardless of stock availability.
    Hope it helps.

  • How to reduce the size of photos

    I want to reduce the size of photos, for example to make them 50% smaller. Please give me the steps in iPhoto, other than preparing photos for export/emails.

    If you're talking about reducing a photo by a percentage from within iPhoto, leaving it in the Library, there is no way that I know of to do that. You must either Export or Send to Email, and even there, there is no way to simply reduce by a percentage. You would need to determine the resolution in pixels of the original and reduce the pixels by half.
    I would use Graphic Converter to do that. It has a scaling feature to reduce by percentage.

  • Costing affect with MRP at planned order level and after production order

    Dear all,
    How does costing get affected with MRP at the planned order level and after production order completion?

    Dear Maulik,
    Check Mr. Vinod's inputs about costing:
    1. Maintain the following master data:
    MM01: Material master with costing and accounting views. Make sure all the costs (standard, moving average, etc.) are maintained along with the valuation class in Accounting view 1. In the Costing tab, please make sure "with qty structure" and "Material Origin" are ticked.
    CS01: BOM with components marked as costing-relevant in the Status/Long Text tab. Also check that all the quantities are correct.
    CR01: Work center; in the Basic Data tab maintain the standard value key, say SAP1, and in the Costing tab maintain the work center, activity type and formulas (usually standard formulas are used, such as SAP006 for machine hours, SAP007 for labour and SAP005 for setup activity).
    KL01: Create activity types; in the Basic Data tab maintain ATyp category "1" and the cost element (as provided by your FI colleague).
    KP26: Activity pricing; here, for the combination of activity type and cost center, you assign a price to the activity.
    2. SPRO Activity
    Maintain the costing sheet: IMG > Controlling > Product Cost Controlling > Product Cost Planning > Basic Settings > Overheads > Define Costing Sheet.
    OKKN: Define the costing variant. Here you define the costing type (in the costing type you define the purpose of a material cost estimate by specifying, for example, which field in the material master record the costing results can be transferred to, such as standard price or moving average), the valuation variant (which defines how prices are prioritized), date control and the transfer structure.
    3. Costing Process Flow:
    1. CK11N CREATE COSTING RUN
    2. CK24 CREATE PRICE UPDATE (you mark and release the price to be updated)
    4. COSTING VARIANT WRT PP
    1. OPL1 DEFINE COSTING VARIANT WRT PP
    2. OKK4 or OPN2 DEFINE VALUATION VARIANT
    3. OKK1 or OPM1 DEFINE COSTING TYPE
    4. OPL8 ORDER TYPE DEPENDENT PARAMETERS (assign the above created costing type for both planned and actual)
    Hope this will throw some light on product costing.
    Regards
    kumar
