What do Development DBAs do?

I'm reading this http://www.simple-talk.com/sql/database-administration/what-use-is-a-development-dba/
but could someone please explain the difference between a production DBA and a development DBA?

Maybe it is just practice for the DBA, since there are usually only developers using Development and Test.

That depends. Many places have DBAs who support developers during the development process. This follows the standard industry wisdom that it is better (cheaper, easier) to fix bugs as close to the point of development as possible. Similarly, it is better to get the architectural, infrastructural and data modelling aspects of a database system correct as soon as possible.
Of course, you don't necessarily need to be a DBA to do all of that but it helps to have a designated person who decides things like whether to use transportable tablespaces. That person may not be a full-time development DBA. They may be a part-time developer or they may be a part-time production DBA too.
Cheers, APC

Similar Messages

  • What are the best free tools for an Oracle development DBA?

    Hi, I have been a production DBA for years. I am familiar with the full cycle of database installation, configuration, backup and recovery, etc.
    Now my role is going to change to development DBA soon. I have no problem creating instances. But for data modeling, data normalization, etc., what kind of tools can I use?
    Any good suggestions or documents you can provide?
    Thanks in advance.

    The only good tools for normalizing your database are your own intelligence, knowledge, and experience. A great place to feed your knowledge and borrow someone else's experience is Ken Downs's blog, The Database Programmer.
    As for PL/SQL development tools, try SDDM's older sister, SQL Developer.

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this – most Americans can get by without having to take this into account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked out to 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong – the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because between being the standard and the way it works, it gets people into a lot of trouble. UTF-8 became popular for two reasons. First, it matched the standard codepages for the first 127 characters, so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 bytes are all single-byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes as the start of a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes. Those then each lead to a third byte, and those three bytes define the character. This goes up to 6-byte sequences. Using this MBCS (multi-byte character set) approach you can write the equivalent of every unicode character, and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then insert a character like ß, which their text editor encodes as a single byte using the codepage for their region, and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the declared encoding and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
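    To make that failure concrete, here is a minimal Java sketch (the class name is made up and no real file is involved) showing a byte written under a Latin-1 codepage being misread as UTF-8:
        import java.nio.ByteBuffer;
        import java.nio.charset.CharsetDecoder;
        import java.nio.charset.CodingErrorAction;
        import java.nio.charset.StandardCharsets;

        public class EncodingTrap {
            public static void main(String[] args) throws Exception {
                // "\u00DF" is ß; a Latin-1 (ISO-8859-1) editor saves it as the single byte 0xDF
                byte[] latin1Bytes = "\u00DF".getBytes(StandardCharsets.ISO_8859_1);

                // Read it back assuming UTF-8: 0xDF announces a 2-byte sequence, so a lone
                // 0xDF is malformed and is silently replaced with U+FFFD
                System.out.println(new String(latin1Bytes, StandardCharsets.UTF_8));

                // A strict decoder reports the problem instead of hiding it
                CharsetDecoder strict = StandardCharsets.UTF_8.newDecoder()
                        .onMalformedInput(CodingErrorAction.REPORT);
                strict.decode(ByteBuffer.wrap(latin1Bytes)); // throws MalformedInputException
            }
        }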
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
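    As a small illustration of Point 3 in Java (assuming Java 11 or later; the file name is just a placeholder), the NIO API lets you name the charset on every read and write instead of relying on the platform default:
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class ExplicitEncodingIO {
            public static void main(String[] args) throws Exception {
                Path file = Path.of("notes.txt");   // hypothetical file name

                // Write with an explicit charset instead of the platform default
                Files.writeString(file, "caf\u00E9 \u00DF\n", StandardCharsets.UTF_8);

                // Read it back, again naming the charset instead of letting the runtime guess
                for (String line : Files.readAllLines(file, StandardCharsets.UTF_8)) {
                    System.out.println(line);
                }
            }
        }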
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
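    For example, here is a hedged sketch using Java's built-in DOM and Transformer APIs (the element and file names are made up): the encoding you set is both written into the XML declaration and used to encode the bytes, so the two cannot drift apart:
        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.OutputKeys;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        public class XmlWithDeclaredEncoding {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
                Element root = doc.createElement("greeting");
                root.setTextContent("gr\u00FC\u00DF dich");   // text containing non-ASCII characters
                doc.appendChild(root);

                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");   // declared AND used for the bytes
                t.transform(new DOMSource(doc), new StreamResult(new File("greeting.xml")));
                // The output file starts with an XML declaration naming UTF-8
            }
        }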
    Ok, you're reading and writing files correctly, but what about inside your code? This is where it's easy – unicode. That's what the encoders in the Java and .NET runtimes are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
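    One refinement worth knowing (a small Java illustration, not part of the original post): a 16-bit char holds a UTF-16 code unit, so a character outside the Basic Multilingual Plane occupies two chars, and length() counts code units rather than characters:
        public class CodeUnitsVsCharacters {
            public static void main(String[] args) {
                String emoji = "\uD83D\uDE00";   // U+1F600, one character stored as a surrogate pair
                System.out.println(emoji.length());                           // 2 (UTF-16 code units)
                System.out.println(emoji.codePointCount(0, emoji.length()));  // 1 (actual character)
            }
        }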
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Lets start off with two key items
    1.Unicode does not solve this issue for us (yet).
    2.Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And lets add a codacil to this – most Americans can get by without having to take this in to account – most of the time. Because the characters for the first 127 bytes in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs). And because we only use A-Z without any other characters, accents, etc. – we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127 – then the trouble starts.
    Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is encoded exactly the same way in UTF-8 anyway.
    The computer industry started with diskspace and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact we're lucky that the byte worked best as 8 bits or we might have had fewer than 256 bits for each character. There of course were numerous charactersets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 bytes were identical on all and the second were unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for awhile this worked well. Operating systems, applications, etc. mostly were set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country – that broke the paradigm.
    The above is only true for small-volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years, then a column with a size of 8 bytes is significantly different from one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this the best, and where everyone trips over it, is HTML and XML. Every HTML and XML file can optionally have the character encoding set in it's header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guess wrong – the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now lets' look at UTF-8 because as the standard and the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First it matched the standard codepages for the first 127 characters and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs from the Asian codepages. The first 128 bytes are all single byte representations of characters. Then for the next most common set, it uses a block in the second 128 bytes to be a double byte sequence giving us more characters. But wait, there's more. For the less common there's a first byte which leads to a sersies of second bytes. Those then each lead to a third byte and those three bytes define the character. This goes up to 6 byte sequences. Using the MBCS (multi-byte character set) you can write the equivilent of every unicode character. And assuming what you are writing is not a list of seldom used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of unicode, all of unicode, are based on ASCII. The representational format of UTF-8 is required to implement unicode, thus it must represent those characters. It uses the idiom supported by variable-width encodings to do that.
    But here is what everyone trips over – they have an HTML or XML file, it works fine, and they open it up in a text editor. They then add a character that in their text editor, using the codepage for their region, insert a character like ß and save the file. Of course it must be correct – their text editor shows it correctly. But feed it to any program that reads according to the encoding and that is now the first character fo a 2 byte sequence. You either get a different character or if the second byte is not a legal value for that first byte – an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encode. If you must create with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Lets take what is actually a very difficlut example – your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know Java files have a default encoding – the specification defines it. And I am certain C# does as well.
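    As a small aside, in Java you can inspect that default directly; this is only a sketch of where the fallback comes from when no encoding is stated:
        import java.nio.charset.Charset;

        public class DefaultCharsetDemo {
            public static void main(String[] args) {
                // The charset used whenever no encoding is specified explicitly
                System.out.println(Charset.defaultCharset());
                System.out.println(System.getProperty("file.encoding"));
            }
        }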
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading & writing files correctly but what about inside your code. What there? This is where it's easy – unicode. That's what those encoders created in the Java & .NET runtime are designed to do. You read in and get unicode. You write unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual compilation. Thus it is possible to create strings in Java with escaped unicode characters which will fail to compile.
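    A tiny illustration of that point, as I understand the Java compiler's behavior: unicode escapes are translated before the source is tokenized, so an escape that expands to a line terminator breaks a string literal:
        public class UnicodeEscapeGotcha {
            public static void main(String[] args) {
                String ok = "\n";          // fine: \n is an ordinary escape inside the literal
                // String bad = "\u000A";  // would NOT compile: \u000A becomes a real line
                //                         // terminator before parsing, splitting the literal
                System.out.println(ok.length());   // 1
            }
        }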
    Point 5 – (For developers on languages that have been around awhile) – Always use unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes, memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions appropriate to that. Thus there is absolutely no point in someone who is creating an inventory system for a standalone store crafting a solution that supports multiple languages.
    Another example: in high-volume systems, moving/storing bytes is relevant. As such one must carefully consider each text element as to whether it is customer-consumable or internally consumable. Saving bytes in such cases will impact the total load of the system. In such systems incremental savings impact operating costs, and speed is a marketing advantage.

  • What iOS Developer Program license should be used?

    What iOS Developer Program license should be used for the following scenario: our company wants to develop an app to be distributed among employees, our subsidiaries, our service/sales partners (distributors of our products) and finally end users. The app is primarily aimed at end users of our product. We don't want to provide that app via the iTunes App Store to the general public, but rather only to users that are well known to our company or to our sales/service partners. These users already have one of our products.
    Is such a scenario feasible at all? Comparing iOS Developer and iOS Developer Enterprise it looks like applications can be distributed either through the App Store or through enterprise deployment limited to a company's employees only. Please advise. Thanks!

    If you can reach these forums here, you can reach those links.
    Apps for employees...Enterprise Program.
    Everything else....Individual Program. See the App Store Review Guidelines for iOS Apps for restrictions and cases where apps may be rejected, otherwise.

  • What does the DBA need to do to grant privileges so I can CREATE DIRECTORY

    Hello,
    What does the DBA need to do in order to grant privileges to:
    CREATE OR REPLACE DIRECTORY douglas_my_files as 'C:\Documents and Settings';
    This is what SQL Navigator says:
    [1]: (Error): ORA-01031: insufficient privileges
    Thanks
    Doug

    What the DBA most likely needs to grant is the CREATE ANY DIRECTORY system privilege (there is no separate CREATE DIRECTORY privilege), e.g. GRANT CREATE ANY DIRECTORY TO <your user>. Also, note that a directory object can only point to a directory on the database server, not on your client machine... Unless you have your own My Documents folder on the database server, which would be a tad unusual, this probably won't work regardless of the permissions your DBA gives you.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • HT4623 Sir, I have installed iOS 7 but can't get in. I want to ask: what is a developer account, and is it paid?

    Sir, I have installed iOS 7 but can't get in. I want to ask: what is a developer account, and is it paid?

    iOS 7 is for developers only. If you are not a developer, you obtained iOS 7 illegally and as such have voided your warranty and forfeited any help from Apple, and from these forums as well.
    We cannot help you here; you are on your own.

  • What is development class?

    What is a development class?

    Hi Sunil,
    A development class is nothing but a package of development objects; it can also simply be called a package.
    When an ABAP Workbench object is created, the system prompts you to assign it to a package. The package should describe the area that the object belongs to.
    The representation of the object tree in the ABAP Workbench (transaction SE80) uses the package as a navigation aid. If there are more than 100 objects of a certain type (for example, ABAP programs), the object tree can no longer be clearly represented and it becomes increasingly difficult to use the ABAP Workbench. In this case, we recommend creating new packages with the same transport layer and distributing the objects to the new packages on the basis of the areas they belong to.
    Package:
    Related objects in the ABAP Workbench are grouped together in a package. The assignment of an object to a package is entered in the object directory (TADIR). The package determines the transport layer that defines the transport attributes of an object.
    The packages are entered in the table TDEVC. They can be maintained in the following transactions:
    Transaction SE80 -> Enter package -> Double-click the package
    The packages are themselves objects of the ABAP Workbench. They belong to their own packages.
    Comparing development classes and packages:
    Packages can be nested.
    Packages can contain their 'visible development objects' (visible outside of the package) in package interfaces.
    Packages can have use access defined for other package interfaces.

  • What iOS Developer Program Should we choose.

    Hello, this is my first question here,
    I work in a development enterprise, and we want to develop iOS applications ONLY for our clients.
    We have our Windows-based desktop application, and we want to give iPhone support to our clients; I think this is called B2B (business to business) applications. We don't want the app to appear in the App Store; we need the app to be private, because we only want to give access to the app to our clients, and maybe we also need to work with the VPP (Volume Purchase Program).
    Then, what iOS Developer Program should we choose?

    Hey, I was reading through it, and let me put it like this:
    You do not need an iOS subscription to distribute apps to clients if you do not intend to distribute them on the App Store.
    This is all you do:
    - Buy a Mac computer
    - Download Xcode
    - Make apps
    - Then distribute privately via email or an online app-sharing tool.
    I would assume that you would know the clients in person, so you could just load the apps onto their devices for them and then collect the money they owe after everything works well.
    Just to clarify for you:
    The paid iOS subscription is to distribute apps on the App Store.
    Apple does not handle private dealings; that's all on you.
    Hope this helps you!
    Best of luck

  • What is the most in-demand database application to develop for Oracle 10g?

    I am looking into developing a database application to sell. What is the most in-demand database application out there for small to mid-size businesses?

    OcpJames,
    As I understand it, you want to develop an HR application for resale. I developed an HR/Personnel Management System a long time ago. It took a long time to develop, mostly due to the many layers of reporting and government regulation that must be accounted for, and it was a hard sell. I'm sure by now that has gotten even worse with Sarbanes-Oxley, etc. In addition, many states have differing regulations and reporting. I would be extremely surprised if any company would risk running their HR system on a remote server or would use a remote DBA due to the risk involved, since they are directly responsible under the law. However, you might be able to find a niche market where you can fill a need, and that is certainly what I would aim for. Study the marketplace and determine what that would be, try to find a customer that will work with you, and go for it.
    Keep Smiling,
    Bob R

  • Doubt in Developer / DBA track

    Hi,
    I'm preparing for OCA 11g 1z0-051 and 1z0-052.
    I want to become a DBA.
    While browsing through some forums, I came across the terms 'Developer track OCA' / 'DBA track OCA'.
    Could you please let me know what these developer and DBA tracks are about?
    Please suggest whether I chose the correct certification track for DBA.
    Also, what other options do I have in order to become a qualified DBA after clearing OCA?
    Thanks in advance.
    Agathya.

    agathya wrote:
    Hi,
    Im preparing for OCA 11g 1z0-051 and 1z0-052.
    I want to become a DBA.
    When I browse through some forums, I came across terms 'Developer track OCA' / 'DBA track OCA'.
    Could you please let me know what is this developer and dba track about?
    Please sugggest me whether I chose correct certifcation track for DBA?
    Also what are other options that I have inorder to become qualified DBA after clearing OCA.
    Thanks in advance.
    Agathya.
    You are studying the correct exams to become a DBA 11g OCA.
    The next stage would be DBA 11g OCP... but to get the OCP certificate you must prove you attended a specified hands-on training course (which can be expensive).
    See and become familiar with [www.oracle.com/education/certification], which is the definitive reference for these things.
    Rgds - bigdelboy

  • What every developer should know about bitmaps

    This isn't everything, but it is a good place to start if you are about to use bitmaps in your program. Original article (with bitmaps & nicer formatting) at Moderator edit: link removed
    Virtually every developer will use bitmaps at times in their programming. Or if not in their programming, then in a website, blog, or family photos. Yet many of us don't know the trade-offs between a GIF, JPEG, or PNG file – and there are some major differences there. This is a short post on the basics which will be sufficient for most, and a good start for the rest. Most of this I learned as a game developer (inc. Enemy Nations) where you do need a deep understanding of graphics.
    Bitmaps fundamentally store the color of each pixel. But there are three key components to this:
    1. Storing the color value itself. Most of us are familiar with RGB, where it stores the Red, Green, & Blue components of each color. This is actually the least effective method, as the human eye can see subtle differences on some parts of the color spectrum more than others. It's also inefficient for many common operations on a color such as brightening it. But it is the simplest for the most common programming tasks and so has become the standard.
    2. The transparency of each pixel. This is critical for the edge of non-rectangular images. A diagonal line, to render best, will be a combination of the color from the line and the color of the underlying pixel. Each pixel needs to have its level of transparency (or actually opacity) set from 0% (show the underlying pixel) to 100% (show just the pixel from the image).
    3. The bitmap metadata. This is information about the image which can range from color tables and resolution to the owner of the image.
    Compression
    Bitmaps take a lot of data. Or to be more exact, they can take up a lot of bytes. Compression has been the main driver of new bitmap formats over the years. Compression comes in three flavors: palette reduction, lossy, and lossless.
    In the early days palette reduction was the most common approach. Some programs used bitmaps that were black & white, so 1 bit per pixel. Now that's squeezing it out. And into the days of Windows 3.1, 16-color images (4 bits/pixel) were still in widespread use. But the major case was 8 bits/256 colors for a bitmap. These 256 colors would map to a palette that was part of the bitmap, and that palette held a 24-bit color for each entry. This let a program select the 256 colors out of the full spectrum that best displayed the picture.
    This approach was pretty good and mostly failed only for flat surfaces that had a very slow transition across the surface. It also hit a major problem early on with the web and windowed operating systems – because the video cards were also 8-bit systems with a single palette for the entire screen. That was fine for a game that owned the entire screen, but not for when images from different sources shared the screen. The solution was that a standard web palette was created, and most browsers, etc. used that palette if there was palette contention.
    Finally, there were some intermediate solutions such as 16 bits/pixel, which did provide the entire spectrum but with a coarse level of granularity where the human eye could see jumps in shade changes. This found little usage because memory prices dropped and video cards jumped quickly from 8-bit to 24-bit in a year.
    Next is lossy compression. Compression is finding patterns that repeat in a file and then, for the later occurrences, just pointing back to the first run. What if you have a run of 20 pixels where the only difference in the second run is that two of the pixels are redder by a value of 1? The human eye can't see that difference. So you change the second run to match the first and voila, you can compress it. Most lossy compression schemes let you set the level of lossiness.
    This approach does have one serious problem when you use a single color to designate transparency. If that color is shifted by a single bit, it is no longer transparent. This is why lossy formats were used almost exclusively for pictures and never in games.
    Finally comes lossless. This is where the program compresses the snot out of the image with no loss of information. I'm not going to dive into the what/how of this, except to bring up the point that compressing images takes substantially more time than decompressing them. So displaying compressed images – fast. Compressing images – not so fast. This can lead to situations where, for performance reasons, you do not want to store images in a lossless format on the fly.
    Transparency
    Transparency comes in three flavors. (If you know an artist who creates web content – have them read this section. It's amazing the number who are clueless on this issue.) The first flavor is none – the bitmap is a rectangle and will obscure every pixel below it.
    The second is a bitmap where a designated color value (most use magenta, but it can be any color) means transparent. Other colors are drawn and the magenta pixels are not, so the underlying pixel is displayed. This requires rendering the image on a selected background color, and the edge pixels that should be partially the image and partially the underlying pixel end up partially the background color instead. You see this in practice with 256-color icons where they have perfect edges on a white background yet have a weird white halo effect on their edges on a black background.
    The third flavor is 8 bits of transparency (i.e. 256 values from 0 – 100%) for each pixel. This is what is meant by a 32-bit bitmap: it is 24 bits of color and 8 bits of transparency. This provides an image that has finer gradations than the human eye can discern. One word of warning when talking to artists – they can all produce "32-bit bitmaps." But 95% of them produce ones where every pixel is set to 100% opacity and are clueless about the entire process and the need for transparency. (Game artists are a notable exception – they have been doing this forever.) For a good example of how to do this right take a look at Icon Experience – I think their bitmaps are superb (we use them in AutoTag).
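    For anyone wiring this up in code, here is a minimal, hedged Java sketch (the file name and dimensions are made up) that builds a 24-bit-color-plus-8-bit-alpha image and saves it as a PNG, which preserves the per-pixel transparency:
        import java.awt.Color;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class ArgbPngSketch {
            public static void main(String[] args) throws Exception {
                // TYPE_INT_ARGB = 24 bits of color + 8 bits of alpha per pixel ("32-bit bitmap")
                BufferedImage img = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
                for (int y = 0; y < 64; y++) {
                    for (int x = 0; x < 64; x++) {
                        int alpha = (x * 255) / 63;                    // fade from transparent to opaque
                        img.setRGB(x, y, new Color(200, 30, 30, alpha).getRGB());
                    }
                }
                ImageIO.write(img, "png", new File("fade.png"));       // PNG keeps the alpha channel
            }
        }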
    Resolution
    Many formats have a resolution, normally described as DPI (Dots Per Inch). When viewing a photograph this generally is not an issue. But take the example of a chart rendered as a bitmap. You want the text in the chart to be readable, and you may want it to print cleanly on a 600 DPI printer, but on the screen you want the 600 dots that take up an inch to display using just 96 pixels. The resolution provides this ability. The DPI does not exist in some formats and is optional in others (note: it is not required in any format, but it is unusual for it to be missing in PNG).
    The important issue with DPI is that when rendering a bitmap the user may want the ability to zoom in, and/or to print at the printer's resolution but display at a lower resolution – you need to provide the ability for the calling program to set the DPI. There's a very powerful charting program that is useless except for standard viewing on a monitor – because it renders at 96 DPI and that's it. Don't limit your uses.
    File formats
    Ok, so what file formats should you use? Let's go from most to least useful.
    PNG – 32-bit (or less), lossless compression, small file sizes – what's not to like. Older versions of some browsers (like Internet Explorer) would display the transparent pixels with an off-white color but the newer versions handle it properly. Use this (in 32-bit mode using 8 bits for transparency) for everything.
    ICO – This is the icon file used to represent applications on the desktop, etc. It is a collection of bitmaps which can each be of any resolution and bit depth. For these build it using just 32-bit png files from 16x16 up to 256x256. If your O/S or an application needs a lesser bit depth, it will reduce on the fly – and keep the 8 bits of transparency.
    JPEG – 24-bit only (i.e. no transparency), lossy, small file sizes. There is no reason to use this format unless you have significant numbers of people using old browsers. It's not a bad format, but it is inferior to PNG with no advantages.
    GIF – 8-bit, lossy, very small file sizes. GIF has two unique features. First, you can place multiple GIF bitmaps in a single file with a delay set between each. It will then play through those giving you an animated bitmap. This works on every browser back to the 0.9 versions and it's a smaller file size than a flash file. On the flip side it is only 8 bits and in today's world that tends to look poor (although some artists can do amazing things with just 8 bits). It also has a set color as transparent so it natively supports transparency (of the on/off variety). This is useful if you want animated bitmaps without the overhead of flash or if bandwidth is a major issue.
    BMP (also called DIB) – from 1 up to 32-bit, lossless, large file sizes. There is one case to use this – when speed is the paramount issue. Many 2-D game programs, especially before the graphics cards available today, would store all bitmaps as a BMP/DIB because no decompression was required and that time saving is critical when you are trying to display 60 frames/second for a game.
    TIFF – 32-bit (or less), lossless compression, small file sizes – and no better than PNG. Basically the government and some large companies decided they needed a "standard" so that software in the future could still read these old files. This whole argument makes no sense as PNG fits the bill. But for some customers (like the federal government), it's TIFF instead of PNG. Use this when the customer requests it (but otherwise use PNG).
    Everything Else – Obsolete. If you are creating a bitmap editor then by all means support reading/writing every format around. But for other uses – stick to the 2+4 formats above.

    I don't think the comment about JPEG being inferior to PNG and having no advantages is fair. The advantage is precisely the smaller file size because of lossy compression. Saving an image at 80-90% quality is virtually indistinguishable from a corresponding PNG image and can be significantly smaller in file size. Case in point: the rocket picture in that blog post is a JPEG, as is the picture of the blogger.
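    To illustrate that trade-off, here is a small, hedged Java sketch (file names are placeholders) of saving an image as a JPEG at roughly 85% quality using the standard ImageIO writer:
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.IIOImage;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageWriteParam;
        import javax.imageio.ImageWriter;
        import javax.imageio.stream.ImageOutputStream;

        public class JpegQualitySketch {
            public static void main(String[] args) throws Exception {
                BufferedImage source = ImageIO.read(new File("photo.png"));   // hypothetical input file

                // JPEG has no alpha channel, so flatten to plain 24-bit RGB first
                BufferedImage rgb = new BufferedImage(source.getWidth(), source.getHeight(),
                        BufferedImage.TYPE_INT_RGB);
                rgb.createGraphics().drawImage(source, 0, 0, null);

                ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
                ImageWriteParam param = writer.getDefaultWriteParam();
                param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
                param.setCompressionQuality(0.85f);   // roughly "85% quality"

                ImageOutputStream out = ImageIO.createImageOutputStream(new File("photo.jpg"));
                writer.setOutput(out);
                writer.write(null, new IIOImage(rgb, null, null), param);
                out.close();
                writer.dispose();
            }
        }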
    The statements about the TIFF format are slightly wrong. TIFF is sort of an all-encompassing format that is not actually associated with any specific compression. It can be lossless, lossy, or raw. You can have JPEG, JPEG 2000, LZW, PackBits, or deflate (PNG) compressed TIFF files. There are also a few compressions that specialize in binary images (used a lot for faxes). In fact, the TIFF format has a mechanism that allows you to use your own undefined compression. This flexibility comes at a price: not all image viewers can open a TIFF file, and those that do may not be able to open all TIFF files.
    Ultimately though, the main reason people use TIFF is because of its multipage support (like a PDF file), because of those binary compressions (for faxes), and because of its ability to include virtually any metadata about the image you want (e.g. geographical information in a "GeoTIFF").

  • What is a DBA??

    Hi,
    Could I have a definition of what a DBA is? Throughout my career, I was led to believe that a DBA facilitated a business in the storage of business information. However, in the contract I'm on now, the DBA believes that he should tell the business what information they need and how it should be stored, without consultation with any areas of the business.
    C.

    Hi,
    Could I have a definition of what a DBA is?
    From a previous wiki reference:
    "Duties: The duties of a database administrator vary and depend on the job description, corporate and Information Technology (IT) policies and the technical features and capabilities of the DBMS being administered."
    From my experience I would agree on that 'it depends'. E.g. on organisation, business requirements, "features" of the applications, etc.
    However, in the contract I'm on now, the DBA believes that he should tell the business what information they need, and how it should be stored, without consultation with any areas of the business.
    This has little to do with the definition of the role of a DBA, and more to do with severe professional misconduct. One of the main objectives of an IT/IS department is to serve the business (customer) needs so as to enable the business to achieve its goals.

  • What is Developer 6i

    Hi Hussein;
    I need your experience and knowledge one more time, my friend; I am confused about one subject. One of our clients is using 11.5.10.2 on Linux; the db version is 9.2.0.8.
    Now we want to upgrade their db version to Oracle 11g. I had this document from your previous post in another thread :)
    Note: 452783.1 - Oracle Applications Release 11i with Oracle 11g Release 1 (11.1.0)
    But my supervisor told me we should follow this note too:
    Note: 125767.1 Upgrading Developer 6i with Oracle Applications 11i
    Could you tell me please:
    1. What is this Developer 6i?
    2. How can I find what version I use in my existing instance?
    3. What is it used for, and is it necessary to do this upgrade for the Oracle 11g DB upgrade?
    Any information or idea would be so great, Hussein;
    Thanks a lot,
    Helios

    Helios,
    1. what is this Developer 6i,
    It is the 8.0.6 ORACLE_HOME, where Developer 6i is installed, and you can find all executable files (forms, reports, etc.) under this home. If you still need more details about this, refer to the "Oracle Applications Concepts" manual.
    2. how i can find what version i use in my existing instance
    See: how to find the developer version
    Re: how to find the developer version
    3. what this for using and is it neccessary to make this upgrade for Oracle 11g Db upgrade
    As per the 11g upgrade document, you should be on Developer 6i Patchset 18, so if your customer is running some other version, you need to consider upgrading the Developer patchset before starting the database upgrade.
    Regards,
    Hussein

  • What internet development platform do you use

    I didn't know where to post this question, but I figured this group would be the best. I'm a coder building web applications who has recently converted from PC to Mac (which I love, btw). On the PC, I used Dreamweaver to do most of my coding. I'm a line coder rather than a wysiwyg coder. I would use Dreamweaver in code mode.
    My question is: what software are coders using in the Mac world? I know Dreamweaver is available for the Mac, but I don't want to purchase it if it's not the best. If anyone has a suggestion for good support forums for coders on the Mac platform, that would be great too.
    Thanks,
    Duane

    I use Xcode. It will syntax color HTML, XML, and CSS files. PHP files too I think.
    I design my websites in XML for the content, then I have a makefile target in Xcode that uses XSLT to generate the HTML. I use Safari's debug/develop menu to debug the CSS. Then I check all the sites in IE 6 & 7 with Parallels.

  • What is Apps DBA and  Core DBA?

    Dear Friends,
    I often come across the term 'Apps DBA'. I have been trying to get it clarified by many people, but I am not convinced. I want to know: what is the role of an Apps DBA?
    What are the duties performed by an Apps DBA? How is it different from a core DBA?
    I request you to explain on the above subject.
    Thanks and regards
    Bharath Kumar V

    Hi Bharath,
    I want to know what is the role of Apps DBA?
    That's a common question! An Oracle Applications DBA is very different from a regular Oracle database administrator and requires specialized skills in business administration and Oracle application server architectures. The Oracle Applications DBA job role is less compartmentalized than a traditional Oracle DBA's, and the Oracle Applications DBA must also have skills in these areas:
    - Database Design - Many shops require customized functional extensions and reporting data marts and the Oracle Applications DBA must have outstanding Database design skills.
    - Oracle Application Server - The Oracle Applications DBA must understand the internals of the Oracle concurrent manager and understand how to monitor and tune Oracle Applications.
    - Functional Expertise - Many shops require a business degree and a general understanding of the Oracle Applications modules. For example, accountants are widely used to support Oracle E-Business Suite (Oracle Financials), and accountants with an IT background are easily trained in Oracle Applications DBA support.
    I have my full notes on the Apps DBA job roles here:
    http://www.dba-oracle.com/t_how_to_become_oracle_applications_dba.htm
    Hope this helps. . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference":
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
