2013
09.25


Help desk software facilitates support and processes solutions to customers’ problems. It centralizes all customer conversations, whether they arrive via Twitter, Facebook, iPhone, the web, email, phone, iPad, BlackBerry, Android, online chat, Windows Phone, knowledge bases or community forums.

Help desk software automates your help desk processes, such as priorities, dynamic forms and routing rules, so you can deal with incidents and issues efficiently. It also helps you manage and fully control your assets: you can track your software, mobile assets and hardware, and see your entire asset inventory in a single view. This increases your control over your servers and networking and your management of mobile devices. With its remote control capabilities, you can resolve issues from anywhere; web administrators can remotely assist end users or access and control unattended computers. Help desk software also monitors major network parameters, system processes, servers, SNMP traps and so on, so you can detect errors and be notified instantly when action is necessary. Other features include a CMDB, network discovery, SysAid Mobile Device Management capabilities, analytics, a manager dashboard, an admin portal and live chat, among many others.

Customizing Help Desk Software

Customizing your help desk software is important because it gives you more control and better organization. You can rearrange fields, windows and data, and build templates that use default data so that you can serve your customers better. Default data allows quicker data entry for upgrades, assets, changes and other common issues. Customizing your help desk software gives you the power to offer a more capable and satisfying service to your customers.

Some of the elements of help desk software that you can customize include custom actions, custom fields, enhanced workflows, templates and built-in themes. You can have these features tailor-made for your convenience. When customizing your help desk templates, keep one goal in mind: make the software easier to use. The easier the help desk software, the better. In your templates, you can use drag and drop to place organizations, assets, groups, tickets and requests wherever you want them for easier navigation. You can also set the number of templates and customize them for ticket tasks, ticket entries, change request tasks and ticket agents. Templates give you more options for searching, positioning and viewing your default data, and you can pick the quick template option to simplify data entry.

2013
08.29

Until recently, one advantage that mainframe systems had over PC-based network solutions was a well-deserved reputation for safe, secure, reliable and fast direct-access storage devices.

That technology has trickled down to the microcomputer world: The Redundant Arrays of Inexpensive Disks (RAID) drive subsystem architecture allows for data redundancy and protection using multiple drives.

At this time, there are more than five levels of RAID implementation, with a few more under discussion. The main five are not cumulative levels, each building on the one before; rather, they are five different methods of implementation.

RAID Level 1 uses mirrored disks to provide complete data redundancy on a one-to-one basis. Each disk has a twin that contains the same data as the primary disk. This is the most common method used in corporate America today.

Novell Inc.’s NetWare provides for two different versions of RAID Level 1: In disk mirroring, two drives are attached to the same disk controller, while in disk duplexing each drive uses its own individual controller. These configurations provide 100 percent data redundancy at a 100 percent (or greater) increase in cost per megabyte.

RAID Level 2 provides for multiple parity drives, and has a slightly lower cost per megabyte because less than 50 percent of the total available storage is used to maintain data integrity.

Although Level 2 is prevalent in the supercomputer and mainframe worlds, no products that support it are currently available for the microcomputer market.

RAID Level 3, which provides for a minimum of two data drives plus a dedicated parity drive, is the most commonly available implementation of the technology.

The parity drive is used to maintain the error-correction code (ECC) information necessary to rebuild a failed data drive, and the information on the drives is striped across all the available data drives. Performance is improved over RAID Level 2, and the cost per megabyte drops even further because there is only one parity drive.

RAID Level 4 continues the use of dedicated ECC drives to provide data integrity. But performance increases because files are striped in blocks rather than in bits, as is done in Levels 2 and 3. Block-based striping improves performance because the necessity for synchronization is removed. Level 4 also provides the ability to perform multiple simultaneous reads, which also can significantly improve system throughput.

RAID Level 5 does away with the dedicated ECC drive by striping both data and ECC information across all available drives. Adding to Level 4's ability to perform multiple simultaneous reads is the more important feature that allows multiple simultaneous writes. RAID Level 5's greatest advantage is its ability to perform reads and writes in parallel. RAID Level 5 also can provide the lowest cost per megabyte in a redundant data-protection storage scheme.
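
To make the parity idea behind Levels 3 through 5 concrete, here is a minimal sketch in C of how a lost data block can be rebuilt by XORing the surviving blocks with the parity block. It illustrates the principle only; it is not the code of any particular controller or product.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 8     /* bytes per block, kept tiny for illustration */
    #define DATA_DRIVES 3    /* data blocks per stripe; one more block holds parity */

    /* Parity is the byte-wise XOR of every data block in the stripe. */
    void make_parity(unsigned char data[DATA_DRIVES][BLOCK_SIZE],
                     unsigned char parity[BLOCK_SIZE])
    {
        int i, d;

        for (i = 0; i < BLOCK_SIZE; i++) {
            parity[i] = 0;
            for (d = 0; d < DATA_DRIVES; d++)
                parity[i] ^= data[d][i];
        }
    }

    /* Rebuild a failed drive's block by XORing the survivors with the parity. */
    void rebuild(unsigned char data[DATA_DRIVES][BLOCK_SIZE],
                 unsigned char parity[BLOCK_SIZE], int failed)
    {
        int i, d;

        for (i = 0; i < BLOCK_SIZE; i++) {
            unsigned char b = parity[i];
            for (d = 0; d < DATA_DRIVES; d++)
                if (d != failed)
                    b ^= data[d][i];
            data[failed][i] = b;              /* recovered byte */
        }
    }

    int main(void)
    {
        unsigned char data[DATA_DRIVES][BLOCK_SIZE] = {
            "drive-0", "drive-1", "drive-2"
        };
        unsigned char parity[BLOCK_SIZE];

        make_parity(data, parity);
        memset(data[1], 0, BLOCK_SIZE);       /* simulate losing drive 1 */
        rebuild(data, parity, 1);
        printf("recovered: %s\n", data[1]);   /* prints "drive-1" */
        return 0;
    }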

Advanced features under discussion as part of RAID Level 6 include the ability to recover from multiple concurrent disk failures.

But that isn’t to say that RAID is completely flawless.

“To be frank, RAID arrays do fail quite often,” says Dave Masters, engineer at Hard Drive Recovery Associates in Irvine, a RAID recovery specialist. “In fact, too many people see the redundancy as a replacement for a good backup plan, which of course they should not.”

Fault Tolerance

A characteristic that is not specific to RAID, but has become more prevalent alongside it, is the greater acceptance of fault-tolerant features, which include not only the RAID implementations discussed above but also hot swapping.

Hot swapping is the ability to replace components without bringing down the disk subsystem. Currently, only hard drives and power supplies benefit from this capability. Failure of the hard-drive controller or SCSI (Small Computer System Interface) adapter still results in system failure.

Many manufacturers are jumping on the RAID bandwagon. Compaq Computer Corp., for instance, started the trend with the Intelligent Drive Array included with its Systempro computers. Dell Computer Corp. followed with its own RAID implementation, which showed greater performance on medium-sized networks. Zeos International Ltd. and Northgate Computer Systems Inc. have also announced RAID products.

Not to be left out, the hard-drive vendor community has also jumped aboard. RAID arrays are now available from manufacturers such as Legacy Storage Systems Inc., Micropolis Corp. and Core International, which offer high-end RAID systems with hot-swappable drives and power supplies, with matching high-end price tags.

The enabling technology for RAID, in the form of software that allows operating systems to take advantage of the features provided by the various RAID implementations, is provided to OEMs by software vendors such as Chantal Systems Corp., which supplies the software for products such as Micropolis’ Raidion drive array (see accompanying review), and Integra Computing, whose OASIS product provides the software that IBM has been demonstrating to enable RAID services on high-end PS/2 file-server systems.

2013
08.15

Recently, I overheard some programmers talking about creating a simple starfield simulation for a screen saver module. You have probably seen one of these before. The stars begin clustered in the center of the screen and move out toward the edges, producing an effect similar to what you might see while looking out of a spaceship window.

The programmers were talking about the best way to plot the path of the stars, and the general consensus was that it should be done using trigonometry and would require a great deal of floating-point calculation. Each star would occupy a given point and traverse a course which would be defined by triangulation using trigonometric functions.

These programmers knew something about trigonometry and immediately understood how it could be applied to their particular problem. Trigonometry was the tool and their manner of thinking was dictated by it.

While this was not entirely unreasonable, and the general method which they proposed will work, there is an even simpler way to solve this which involves no floating point and is much faster. But if you are stuck on the idea that the problem involves trigonometry, you may never discover an easier way. This was a case in which the tool needed to be abandoned and a new one selected.

INTEGERS SCALE THE SLOPE

It is possible to use integers for each star’s location and slope. The speed and direction of each star’s motion can be expressed by the slope. For example, a slope of 2/4 would have the same direction as a slope of 1/2, but it would have a greater speed.

The problem with integers is that they are imprecise. What if you needed a slope of 1.4/2.125? Another significant reason the programmers felt it was important to use floating point was to avoid round-off errors. In a repeating series of equations, such as those involved in plotting the path of a moving object or calculating an amortization table, round-off errors accumulate in integers and produce results that grow more inaccurate with each iteration until they are completely useless.

Does this mean we have to rule out using integers? No, but we have to be intelligent about it. The right idea is to use scaled integers. Instead of working on a scale of one counting unit per pixel, use something larger. In this month’s program, I use 1,000 units per pixel (see the SCALE constant in the source). This gives me the precision I need while avoiding round-off problems. So, a slope of 1.4/2.125 becomes 1,400/2,125. A scale of 1,000 is not perfectly precise, but it is certainly adequate for my needs.

A slope of 3.54821/2.94516 would become 3,548/2,945 and a little precision would be lost, but this is so small that we can easily ignore it. Remember, too, that floating-point numbers are also integers at heart and represent only an approximation of the true number.
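
Here is a minimal sketch of the scaled-integer update, assuming a SCALE of 1,000 as in the column’s program; the Star structure and the names used here are illustrative, not the original listing.

    #include <stdio.h>

    #define SCALE 1000          /* 1,000 counting units per pixel */

    typedef struct {
        long x, y;              /* position in scaled units */
        long dx, dy;            /* slope (direction and speed), also scaled */
    } Star;

    /* Move one star by its slope times a whole-number speed multiplier.
       Everything is integer arithmetic; dividing by SCALE recovers pixels. */
    void MoveStar(Star *s, long speed)
    {
        s->x += s->dx * speed;
        s->y += s->dy * speed;
    }

    int main(void)
    {
        /* Start at pixel (320, 240) with the slope 1.4/2.125 -> 1400/2125 */
        Star s = { 320L * SCALE, 240L * SCALE, 1400, 2125 };

        MoveStar(&s, 2);
        printf("pixel position: (%ld, %ld)\n", s.x / SCALE, s.y / SCALE);
        return 0;
    }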

You may have seen this technique before. It is common in financial programs to keep track of money using integers by counting in units of cents or even tenths of cents.

If the precision of fractional numbers is vitally important, the most accurate way to represent them would be as rational integer pairs–one for the numerator and another for the denominator. In this way, you can represent any rational number with great accuracy. For example, there is no way to adequately represent 1/3 as a decimal number.

Somewhere, you have to truncate the repeating 3s in 0.3333333, so you will be left with only a close approximation. However, if you use an integer pair, you can represent it exactly as 1 (the numerator) and 3 (the denominator). Any rational number can be represented in this way; that is, in fact, the definition of a rational number. Irrational numbers, such as pi, cannot be represented exactly as integer pairs, but you will still get a more accurate approximation.
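
If you want to experiment with the rational-pair idea, a small sketch (illustrative only, not part of the Stars program) might look like this:

    #include <stdio.h>

    typedef struct { long num, den; } Rational;   /* numerator / denominator */

    /* a/b + c/d = (a*d + c*b) / (b*d) -- exact, with no rounding */
    Rational add(Rational a, Rational b)
    {
        Rational r;

        r.num = a.num * b.den + b.num * a.den;
        r.den = a.den * b.den;
        return r;
    }

    int main(void)
    {
        Rational third = { 1, 3 };
        Rational sum = add(add(third, third), third);   /* 1/3 + 1/3 + 1/3 */

        printf("%ld/%ld\n", sum.num, sum.den);  /* prints 27/27: exactly 1,
                                                   since no reduction is done */
        return 0;
    }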

MAKING STARS

The rest of the Stars program is pretty straightforward. The main() function sits in a loop waiting for a keypress. As long as there is no keypress ready, it animates the stars. When a key is received, it takes the appropriate action: the + key increments the speed multiplier in g.Speed, the - key decrements it, and the Escape key exits the program.

The MoveStars() function loops through each star. First the star is drawn in black (to erase it). The position is then updated by the slope times the speed. Lastly, the star is redrawn at the new location.

The NewStar() function assigns a star to a position close to the center of the screen. To improve the visual effect, a little randomness is thrown in so that all stars will not begin at the same point. The slope is then selected at random, with care taken so that a slope of 0/0 never occurs. Should that happen, the program would eventually wind up with all its stars in the center of the screen and not moving.

The star’s size is selected at random. Ninety percent of the time, it is 1×1 pixel, and 10 percent of the time, it is 2×2 pixels. Finally, the color is selected with 90 percent of the stars as white and the remaining 10 percent as red, gray, blue, or yellow.

That’s all there is to it. All integer arithmetic. No rocket science (if you’ll pardon the pun). You might turn this program into a screen saver or even make it into a backdrop for a space shoot-’em-up game.

THE N-QUEENS AND OTHER PROBLEMS SOLVED

On to old business. I am pleased to announce that we have a winner for the N-queens problem. As you may recall, the N-queens problem is to find a way to place N queens on an NxN chessboard in such a way that no two queens attack each other. The traditional approach involves time-consuming generation and testing of many possible arrangements to find a solution.

Last June, I presented an N-queens solving program and received quite a bit of mail from the readers. Many of you noticed that the solutions frequently contained predictable patterns and tried to design some general pattern-based solution to the problem.

Jeffrey Phelps of Falls Church, Va., has written a program that solves the problem through patterns. His program is available in the library of the ZiffNet Computer Shopper Forum. A detailed description of his method is too large for this column, but it is very nicely documented in his program. Congratulations, Jeff!

Michael Davis of Sumter, S.C., has a question about the November Column, DiskPack. He writes, “I don’t understand what’s wrong with the ‘greedy’ algorithm. Optimum may show me the fewest files to fill up a given disk, but that’s not what I want to do. I want to use the fewest number of disks with the number and sizes of files that I have. Could you go into a little more detail about why the ‘optimum’ algorithm is better?”

For the benefit of readers who are just joining us, I will describe how a “greedy” algorithm works. A greedy algorithm is very simple-minded. It makes no attempt at being clever and just takes the next largest item that will fit. In the case of organizing files to fit on disks, a greedy algorithm would just take the next largest file that will fit on the current disk. If no file will fit in the remaining space on the current disk, it would go on to another disk and put the largest file on it.

In the case of making change, a greedy algorithm would just take the next largest coin denomination and use it. For example, to make change for 23 cents, a greedy algorithm would take two dimes and three pennies.

Five coins is the optimum solution, and the greedy algorithm works. However, our coin denominations were designed (long before there ever were computers!) especially to accommodate the greedy algorithm. It works because each denomination is worth at least twice as much as the next lowest denomination. If that weren’t the case, the greedy algorithm would often fail. Try this example: Imagine a 14-cent coin. Provide 29 cents in change.

Greedy                Optimum
1 quarter             2 14-cent coins
4 pennies             1 penny
5 coins               3 coins
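
Here is the greedy change-maker as a minimal C sketch (the denominations are just data; none of this comes from the column’s listings). Running it shows exactly the failure in the table above:

    #include <stdio.h>

    /* Greedy change-making: repeatedly take the largest coin that still fits.
       Denominations must be listed from largest to smallest. Returns coin count. */
    int greedy_change(int amount, const int *coins, int ncoins)
    {
        int used = 0, i;

        for (i = 0; i < ncoins; i++) {
            used += amount / coins[i];
            amount %= coins[i];
        }
        return used;
    }

    int main(void)
    {
        int usual[]    = { 25, 10, 5, 1 };       /* quarter, dime, nickel, penny */
        int imagined[] = { 25, 14, 10, 5, 1 };   /* add the imaginary 14-cent coin */

        printf("23 cents, usual coins: %d coins\n", greedy_change(23, usual, 4));
        printf("29 cents, with a 14-cent coin: %d coins (optimum is 3)\n",
               greedy_change(29, imagined, 5));
        return 0;
    }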

The same goes with organizing files to fit on disks. Imagine that you want to organize these five files to fit on the fewest number of diskettes. The diskette size is 100K and file sizes are in kilobytes.

READ.ME          40
COMPRESS.COM     40
INSTALL.EXE      30
OPTIMIZE.BAT     30
SORT.COM         30
MERGE.EXE        30

The greedy algorithm would select READ.ME and COMPRESS.COM to go on disk one. This totals 80K, and no remaining file will fit in the 20K left over. It would then put INSTALL.EXE, OPTIMIZE.BAT, and SORT.COM on disk two (totaling 90K), and MERGE.EXE would have to go on a third disk by itself.

An optimizing algorithm would put READ.ME, INSTALL.EXE, and OPTIMIZE.BAT on disk one. This totals 100K and there is no waste. COMPRESS.COM, SORT.COM, and MERGE.EXE would go on disk two (also totaling 100K). A third disk would be unnecessary. The optimizing algorithm “looks ahead” and uses fewer disks.
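
For the curious, here is a hedged sketch of the greedy packer described above, using the file list from the example; the look-ahead optimizing search is deliberately left out:

    #include <stdio.h>

    #define DISK_SIZE 100   /* kilobytes */
    #define NFILES    6

    int main(void)
    {
        /* Already sorted from largest to smallest, as the greedy method requires. */
        const char *name[NFILES] = { "READ.ME", "COMPRESS.COM", "INSTALL.EXE",
                                     "OPTIMIZE.BAT", "SORT.COM", "MERGE.EXE" };
        int size[NFILES] = { 40, 40, 30, 30, 30, 30 };
        int placed[NFILES] = { 0 };
        int disks = 0, left, i, remaining = NFILES;

        while (remaining > 0) {
            disks++;                    /* start a new disk */
            left = DISK_SIZE;
            for (i = 0; i < NFILES; i++) {
                if (!placed[i] && size[i] <= left) {   /* next largest that fits */
                    printf("disk %d: %s (%dK)\n", disks, name[i], size[i]);
                    left -= size[i];
                    placed[i] = 1;
                    remaining--;
                }
            }
        }
        printf("greedy used %d disks\n", disks);   /* 3 here; the optimum is 2 */
        return 0;
    }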

These examples are pretty simple and easily solved by hand, but real-life problems tend to be much more difficult and often require a great deal of calculation (time) to solve. Imagine having to optimize for hundreds of files of widely varying sizes. While practically impossible for a human, this is an ideal job for a computer.

SOLUTION TO THE CHECKERBOARD PROBLEM

A checkerboard alternates squares of light and dark color. Therefore, each domino must cover both a light square and a dark square. Squares in diagonally opposite corners are of the same color, so if you remove them, you will have 30 squares of one color and 32 squares of the other color. It is impossible to cover the board. Seems easy now, doesn’t it? Try this one on your friends.

2013
08.06

Here’s an appropriate example of where recursion is absolute gold. How about a searching problem? Have you ever heard of the Traveling Salesman problem? The idea is that a traveling salesman must visit several cities. His time is valuable, so he wants to find the shortest route through all the cities. This problem is about as practical as they come.

Airlines, delivery services, the post office, and just about any businesses that move things welcome the discovery of a more efficient route. It’s worth a lot of money to them. How about yourself? If you travel, wouldn’t it be worthwhile to find faster routes?

An interesting application that finds the shortest routes, not by using factorials but by other means, is Automap, from Automap, Inc., Phoenix, AZ, (602) 893-2400. The Automap program contains road information for the entire U.S. and can find the shortest route (within its database tables), the quickest route (by referencing other tables), or a mixture of routes. It can then display the routes as directions, or graphically on a map.

The program we’ll study does this very thing, albeit in a much, much simpler way. Given a set of cities, it will find the shortest path that makes a complete tour (returning to the starting city). Of course, this is a small program, designed to demonstrate the method behind the code, but with modification, perhaps it could shave some time off of your daily commute.

Make no mistake, this is not a trivial problem. Much research is directed to discovering better solutions faster. Why is it so hard? Because while there may be billions of routes, there is only one shortest route. For N cities, there exist (N-1)! routes. (There’s that factorial again!)

Where did that formula come from? Let’s say you’ve got to establish a route that visits five cities. Pick a city to begin at; any one will do, since the route will end up at the point of origin. Now, you’ve got four choices for the next city on the route. After you select one of those, you’ve got three cities left–then two–then only one, and then you return to the starting city. That’s where the factorial comes in. With five cities, we have 4*3*2*1 = 24 possible routes.

Cities             Routes
3                  2
4                  6
5                  24
6                  120
7                  720
8                  5,040
9                  40,320
10                 362,880
11                 3,628,800
12                 39,916,800

As you can see, the numbers start to get out of hand fast. This is known as exponential explosion and is the bane of some of the most interesting programs in computer science. (Chess-playing programs are another example of exponential explosion.) Even the fastest computer in the world cannot find the provably shortest path in a network of 50 cities. The program presented here has an upper practical limit of about 15 cities (on a fast PC), unless you want to let it run for a week or two.

Listing 3 presents the program written in Turbo C. If you have just a little experience, you can easily convert the source to Quick C. The only differences will be in the graphics calls. Build the program with the command line “TCC TRAVEL.C GRAPHICS.LIB.” Be certain that the appropriate BGI graphics driver is on your path.

THE ASSUMPTIONS TAKEN

This program makes a few assumptions. Graphically, it assumes an aspect ratio of 1:1 (100 units on the X-axis is the same distance as 100 units on the Y-axis). If you have a VGA display, this will be no problem, but if you have an EGA or other display, it may appear distorted.

For simplicity, it takes into account only the distance between two cities and assumes travel in a straight line in any direction. In real life, though, you usually cannot travel in a straight line, and there may be factors other than merely distance to determine the cost of traveling between two cities. These elements are not difficult to incorporate, but make the program more complex than an example should be, or has room to be.

The first thing that main() does is attempt to enter graphics mode. If that fails, it prints an error message and exits. Next, it sets the drawing mode to XOR_PUT. In this mode, repeating a drawing operation becomes the equivalent of performing an erase. You’ll see how this is used in the Search() function.

Listing 2: C Factorial Loop Code

    long Factorial(long i)
    {
        long Result;

        for (Result = 1; i > 0; i--) {
            Result *= i;
        }
        return (Result);
    }

Next, the program determines the number of cities. The default is six, but a number can be passed on the command line. Technically, this program can handle up to 50 cities, yet you’ll grow very old before it finds a solution to even 30, so I recommend experimenting with just a few cities and working your way up (to the limit of your patience) from there.

Next, main() calls CreateRandomCities(), which builds the list of cities and draws each one as a little circle on the screen.

CreateRandomCities() also builds the mileage matrix that contains the distance between each city. The mileage matrix could be generalized into a “cost” matrix, which would take into account not only distance, but other factors such as road conditions, tolls, etc. This data (the city locations and the cost matrix) could be read from a “map” file for a real-world problem, rather than generated randomly.

After the cities are created, the Search() function is called to find the shortest path through the cities. Finally, the solution is displayed on the screen in DrawMinPath().

ENTER RECURSION

The heart of this program is the recursive Search() function. It takes three arguments: the next city to search, the current distance traveled, and the number of cities visited so far. Search() first makes a quick check to make sure it hasn’t already traveled further than the distance of the shortest route found so far. Without such a check, the program may end up wasting time on unproductive routes.

Next, it examines the keyboard to see if a key has been hit (which is the program’s fast alternative to rebooting), and then the current city is stored in the path. If Search() has made a complete route, it saves the current path in the minimum path array. Otherwise, Search() must continue by examining the remaining cities.

This is where recursion comes in beautifully, and transforms what would otherwise be a monster-sized program into a small set of code. If the city hasn’t been visited, the program flags it as visited and draws a line between the current city and the next one on the screen. Then, it just Search()es it! Search() calls itself (recursion) and passes along the new city to search, the current distance in the path, and the count of the number of cities visited so far. When it comes back from the search, it flags the city as unvisited and redraws the line. Remember that the graphics drawing mode is XOR_PUT, so this erases the line.
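
For readers without the listing handy, here is a stripped-down sketch of the recursive search with the graphics and keyboard handling removed. The names and structure are illustrative; this is not the original TRAVEL.C source.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define MAXCITY 10

    int    NumCities = 6;
    double Dist[MAXCITY][MAXCITY];    /* the mileage (cost) matrix */
    int    Visited[MAXCITY];
    int    Path[MAXCITY], MinPath[MAXCITY];
    double MinDist = 1e30;

    /* city  = the city just reached
       sofar = distance traveled so far
       count = how many cities are on the path so far */
    void Search(int city, double sofar, int count)
    {
        int next;

        if (sofar >= MinDist)          /* already worse than the best tour: prune */
            return;

        Path[count - 1] = city;

        if (count == NumCities) {      /* complete tour: close the loop, record it */
            double total = sofar + Dist[city][Path[0]];
            if (total < MinDist) {
                MinDist = total;
                for (next = 0; next < NumCities; next++)
                    MinPath[next] = Path[next];
            }
            return;
        }

        for (next = 0; next < NumCities; next++) {
            if (!Visited[next]) {
                Visited[next] = 1;                             /* flag as visited */
                Search(next, sofar + Dist[city][next], count + 1);
                Visited[next] = 0;                             /* un-flag on the way back */
            }
        }
    }

    int main(void)
    {
        double x[MAXCITY], y[MAXCITY];
        int i, j;

        for (i = 0; i < NumCities; i++) {      /* random city locations */
            x[i] = rand() % 100;
            y[i] = rand() % 100;
        }
        for (i = 0; i < NumCities; i++)        /* straight-line distances */
            for (j = 0; j < NumCities; j++)
                Dist[i][j] = sqrt((x[i] - x[j]) * (x[i] - x[j]) +
                                  (y[i] - y[j]) * (y[i] - y[j]));

        Visited[0] = 1;
        Search(0, 0.0, 1);

        printf("shortest tour found: %.1f\n", MinDist);
        for (i = 0; i < NumCities; i++)
            printf("%d ", MinPath[i]);
        printf("\n");
        return 0;
    }

Trying the next closest city first, as suggested below, would make the pruning test fire much more often on larger inputs.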

That’s all there is to it. Really! Recursion makes searching a path much simpler (I wouldn’t want to do it any other way). There is room for improvement in this program, and you may enjoy enhancing its sophistication and its ability to handle even larger problems.

I can think of one improvement right off the bat. As written, the code blindly chooses the next city to search by simply going through the list and taking them in order. It would be much smarter to choose the next closest city instead. Another improvement would be to generate a reliable (lower-bound) estimate of the remaining distance to travel in an incomplete route in order to detect ahead of time that the route leads to a dead end.

Even the best programs on the fastest computers cannot completely solve much larger routes than this program can, so they use heuristics and otherwise settle for “good” solutions rather than the optimum.

You may wish to experiment and comment out the line-drawing code in the Search() function, since most of the program’s time is spent just drawing lines. A great deal of time is spent in the kbhit() routine as well, so checking it less often would speed things up.

Good luck, and happy motoring.

2013
08.01

Mining companies are using optimization techniques to save enormous amounts of time and money. So far, these techniques have centered on expenditures.

Expenditures that are related to time rather than to tonnage or production require careful thought, but there is a clear rule that allows you to decide which should be included:

Any expenditure that would stop if mining stopped must be included in one of the costs input to Four-D, and conversely, any expenditure that would not stop if mining stopped must be excluded.

The reasoning behind this is that, when the optimizer adds a block to the pit outline, it may effectively extend the life of the mine. If it does, the extra costs that would occur as a result of this extended life must be paid for. Otherwise the optimizer will add blocks to the pit that reduce rather than increase its real value.

Since the optimizer can only take note of costs expressed through the block values, it is necessary to share these time-related costs between the blocks in some way. How they should be shared depends on whether production is limited by mining, by processing or by the market. Usually it is limited by processing, and, in this case, only the mining of an ore block extends the life of the mine. The ore block values should therefore include an allowance for time costs. This is done by adding an appropriate amount to the processing cost per tonne. If production is limited by mining, as in some heap leach operations, every block that is mined extends the life of the mine, so that time costs should be added to the mining cost. A market limit means that time costs should be added to the selling cost. In each case, the amount added is the time costs per year divided by the throughput limit per year.
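
For example, with purely hypothetical figures: if time-related costs total $5 million per year and the plant is limited to processing 2 million tonnes of ore per year, then $2.50 per tonne would be added to the processing cost of each ore block.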

During analysis, as distinct from optimization, it is possible to handle time costs explicitly.

The reference block

Four-D assumes that all costs that you give it are calculated for a particular block in the model. This block, called the “reference block”, is usually at the surface, but it can be anywhere you nominate. The concept of a reference block is very important in Four-D.

Waste mining and processing costs should be worked out for the reference block even if there is no appropriate material in that block. That is, the reference block may consist entirely of barren material, but you should still work out the processing cost as though the material to be processed was in that block.

Four-D deals with any variation of these costs, such as the increase of mining cost with depth, by the use of “cost adjustment factors”. There can be adjustment factors for waste mining cost and for processing cost for each block in the model. There can be a second adjustment for waste mining cost that depends on rock type.

Extra ore mining costs

Because different equipment may be used, it is not uncommon for the cost per tonne of mining ore to be greater than the cost per tonne of mining waste. For Four-D purposes, this extra cost should be added to the processing cost.

For example, if the costs of mining and processing ore are $1.54 and $7.37 respectively, and the cost of mining waste is $0.82, then, for Four-D purposes, we use a processing cost of $8.09 (= 1.54 + 7.37 - 0.82).

Remember that it is important to calculate these figures initially as though mining were taking place at the reference block, even if there is no mineralized material in the reference block. If the costs are different in other parts of the model, then the differences should be handled by including positional mining and/or processing cost adjustment factors in the model.

Cost ratios

Once you have calculated the various costs, they are input to Four-D, for optimization purposes, as ratios rather than as currency amounts. In effect, the cost of mining undefined waste at the reference block is used as the unit of currency, and other costs are expressed in such units.

Thus processing cost is entered as the cost of processing a tonne of material divided by the cost of mining a tonne of undefined waste at the reference block. Rehabilitation cost and selling cost are handled in the same way.
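
As a worked example, using the figures from the extra ore mining cost example above, and assuming the $0.82 per tonne waste mining cost applies at the reference block, the $8.09 processing cost would be entered as 8.09 / 0.82, or roughly 9.87.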

During analysis, after the optimization, you enter the cost of mining undefined waste at the reference block, and the ratios are then used to calculate the processing, rehabilitation and selling costs from this.

Examples

Some examples of the handling of various costs may be helpful and these are discussed below.

Processing mill

Consider a processing mill that costs $10m to build and commission.

If the mine were to be shut down, for whatever reason, on day 2 of operations, the mill would have a certain salvage value, say $6m. In this case $4m has gone for ever. It is an “up-front” or “sunk” cost that must be subtracted from any optimized value of the pit itself, or entered during analysis as an initial capital expenditure. It is not a cost for optimization purposes.

We can deal with the remaining $6m in one of two ways.

If we assume that there will be an on-going program of maintenance and capital replacement that will keep the salvage value of the mill close to $6m in today’s dollars, then the $6m is theoretically recoverable when the mine is closed, and so is not a cost. However the maintenance and periodic capital replacement expenses are costs for these purposes, because they would stop if mining stopped. They should be averaged and treated as a time cost.

Alternatively, we can assume that only essential maintenance will be done, and that the salvage value of the mill will progressively decline. In this case the expected rate of this decline should be treated as a time cost. Note that the rate of decline is not necessarily the same as the depreciation rate that is used by accountants. In most cases the depreciation rate is set by taxation considerations, and may reduce the book value to zero when the salvage value is clearly not zero.

We discuss the interest on the salvage value below.

Trucks

If the expected life of the mine is shorter than the operating life of a truck, then truck purchases can be treated in the same way as the cost of the mill.

If the life of the mine is much longer than the life of a truck, then trucks will have to be purchased progressively to maintain the fleet, and such purchases will stop if mining is stopped. Consequently the cost of purchasing trucks should be averaged out over the life of the mine and treated as a time cost.

Unless the life of the mine is expected to be very long, some compromise between the above two approaches is usually required.

Contract mining companies must take these factors into account when quoting for a job, and it is sometimes useful to think as they do when you are working out the costs for your own fleet. You should include everything that they do, except for their allowance for profit.

Administration costs

On-site administration costs will usually stop if mining is stopped. They must therefore be treated as a time cost.

Head office administration costs may, or may not, stop if mining stops at this particular mine, and thus may, or may not, be included.

Bank loans for initial costs

Repayment (principal and interest) of a bank loan taken out to cover initial set-up costs will have to continue whether mining continues or not. It should therefore not be included in the costs used when calculating block values.

Of course, these repayments will have to come from the cash flow of the mine. If the mine is not going to produce enough cash flow to cover them, the project should not proceed. You should not introduce these repayments as costs in an attempt to “improve” the optimization. The result will be quite the opposite. You will get a smaller pit with a smaller total cash flow.

Although the bank loan repayments themselves are not included, some of the items that the loan was used to pay for may be included, as is explained further below.

Bank loans for recoverable costs

If you borrow money from the bank for day-to-day working capital or for items, such as the $6m discussed in the mill example above, then you can reasonably expect to repay the loan if mining stops. Consequently the interest paid on such a loan is a cost that stops if mining stops. It should therefore be treated as a time cost. Note that Four-D works throughout in today’s currency, so the interest rate used should not include an allowance for inflation.

Grade control costs

It is often necessary to do grade control work on waste as well as ore. In this case, grade control costs apply to waste as well. If only some of the waste is grade controlled, then the correct way to handle it is to load the cost of those particular waste blocks. However, many users estimate the tonnes of such waste per tonne of ore and load the cost of mining ore instead.

Support – cable bolts

If the permitted pit wall slope is to be increased by the use of cable bolts, the cost per tonne is related to pit size, which has to be estimated. Then a cost per square metre of wall can be transformed into a cost per tonne of waste. This is an iterative estimate, but fortunately costs per tonne are usually low.

These examples do not cover all possible costs, but should indicate how to treat most costs.

2013
07.27

The application itself may also test to see if a math coprocessor is installed. During initialization, it tests for the math coprocessor in the same way the BIOS does. In fact, it is becoming more common for applications themselves to test for the presence of the math coprocessor as well as other parameters, and then configure themselves automatically to operate with the hardware that is installed.

If no math coprocessor is present, the CPU will perform the math calculations using lengthy software instructions that emulate the math coprocessor’s built-in functions. While the emulation may occur within the operating system, it is more likely to occur within the application program. Thus, when a math coprocessor instruction is encountered, the application will execute the emulation subroutine. Software emulation performance speeds are much slower than those of math coprocessors. This is the primary reason why many CAD programs can’t operate without a math coprocessor–their execution speed would be unacceptably slow.

BINARY NUMBERS

Computers operate using base 2, or binary numbers. That is, only two numerals (0 and 1) represent all values. We’re accustomed to the decimal system, which has 10 numerals, 0 through 9. The binary system is used with computers (and digital electronics in general) because binary values can be represented as “off” and “on.”

INTEGERS AND REAL NUMBERS

In mathematics, there are several number systems. The two that are relevant here are integers and real numbers. Integers are the set of all whole numbers, both negative and positive, and zero; there are no fractions. The real number system includes integers and all fractions.

A microprocessor is optimized to handle integer arithmetic. In other words, CPUs are adept at performing mathematical operations on whole numbers (in this case, whole binary numbers). Calculations using real numbers are executed using integers to approximate the real values. The math coprocessor, on the other hand, is optimized to handle real numbers.

FLOATING POINT REPRESENTATION

The 32-bit word of a 386 or 486 CPU can represent the integers -2^31 through +2^31 - 1 (or approximately -2 billion to +2 billion in decimal). The 16-bit word in the 286 CPU can represent the values -32,768 through +32,767. In both cases, one bit must be reserved for the sign. Neither range covers enough values for many personal computer applications.

To accommodate larger values, multiple precision representation is used. For example, double precision uses two 32-bit words to represent a single integer value. This results in 63 bits, plus a bit for the sign. Larger values can be handled using multiple words. This, however, requires more CPU time. Arithmetic performed on each word of a multiple precision number can result in a carry (or borrow), which must be added (or subtracted) from the upper word. Software can handle this easily, but must carry out several instructions to do so. Multiple precision arithmetic takes longer than single precision.

Math-intensive applications use real numbers, not just integers. Without a math coprocessor, a method must be used to represent real numbers within the capabilities of the CPU’s format. To do this, real numbers are scaled so they can be represented as integers. Scaling simply means multiplying an integer by another value.

Using this method, the 32-bit word could be scaled to represent a much larger range of numbers, and could represent real numbers as well as integers. There are, however, limitations imposed when the CPU does arithmetic with scaled, fixed precision integers. For example, if two large 32-bit numbers are multiplied, the result will be larger than 32 bits; an overflow will then occur, resulting in an error. Another error can result when dividing two very large, but nearly equal numbers. The calculated answer will be too small to represent and an underflow occurs. The third limitation occurs due to rounding errors. These limitations can be avoided by having the CPU read multiple words, however, this slows down the computer considerably.
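
A minimal sketch of the scaled (fixed point) integer idea, using a scale factor of 1,000 chosen purely for illustration:

    #include <stdio.h>

    #define SCALE 1000L   /* three decimal digits of fraction, for illustration */

    /* Multiplying two scaled numbers gives a result scaled by SCALE*SCALE,
       so one factor of SCALE is divided back out. The 64-bit intermediate
       guards against the overflow described in the text. */
    long fx_mul(long a, long b)
    {
        return (long)(((long long)a * b) / SCALE);
    }

    int main(void)
    {
        long pi = 3142;                         /* 3.142 in scaled units */
        long r  = 2500;                         /* 2.500 */
        long area = fx_mul(fx_mul(pi, r), r);   /* pi * r * r */

        printf("area = %ld.%03ld\n", area / SCALE, area % SCALE);  /* 19.637 */
        return 0;
    }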

A better way of representing a large range of real numbers is to use scientific notation. Scientific notation is simply a way of scaling values. The scaling factor is always a power of 10, and the number being scaled always has a single digit to the left of the decimal point, so it is not written as an integer. Here are some examples of scientific notation:

3.2 x 10^1 = 32
-6.250 x 10^3 = -6,250
2.5 x 10^-1 = 0.25 (note that 10^-1 is 0.1)
3.0 x 10^-4 = 0.0003

Note that in floating point representation, the decimal point “floats,” so that it always follows the first digit. This makes it easy to keep track of where it belongs.

INTEGER VS. FLOATING POINT

As mentioned, computers work with two different representations of numbers: integers and floating point numbers. Integers are “whole” numbers such as 1, 13, and 529. Much of the math used in computer application programs is performed on integers. For example, if you give a spreadsheet the command to “go to line 115” from line 10, the program moves down 105 lines (115 - 10). Lines, of course, are only expressed in whole numbers. On the other hand, other mathematical operations require fractions. Fractions are always represented as decimals rather than as a ratio (such as 1/2).

It’s sometimes difficult to work with two numbers that differ greatly in size, such as 1,593.0 and 0.0001. Scientific notation was invented to make such calculations easier. In scientific notation, only one digit precedes the decimal point, while the remaining digits follow behind the point. This number is then multiplied (or “scaled”) by a power of 10. For example, the scientific notation of the two numbers above would be 1.593 x 10^3 and 1.0 x 10^-4. The power of 10 is always the number of digits that the decimal point has been moved–positive when moving to the left and negative when moving to the right. This notation, using a base number (called the significand) and the power (called the exponent), is also called floating point, since the decimal point “floats” to that position which leaves one digit to its left.

Computers work with binary numbers rather than the more familiar decimals. Floating point numbers in binary are represented the same way as they are in decimal, except they have a “binary point” instead of a decimal point and they are calculated in base 2, rather than base 10.

In floating point representation, there are three parts to the number: the sign, preceding the number; the number to be scaled, called the significand; and the scaling factor, called the exponent.

A 32-bit word can be used to hold the sign, the significand, and the exponent (all in binary), and represents a wide range of real numbers. Double precision extends the range even further.

Floating point representation makes real-number arithmetic easier, too. For addition and subtraction, the scaled numbers are simply added or subtracted. For example: (2.345 x 10^4) + (3.227 x 10^4) = 5.572 x 10^4

If the exponents are not the same, the significand must be adjusted. To add 4.453 x 10^5 and 2.372 x 10^3, the second number must be adjusted so its exponent is 10^5; thus, the addition would be: (4.453 x 10^5) + (0.02372 x 10^5) = 4.47672 x 10^5

Scientific notation makes calculations of large numbers simple. The following example shows how floating point notation eases the execution of mathematical functions that have very large or very small numbers (or worse, a combination of the two). Avogadro’s Number is an example of a very large number, and the intrinsic charge on an electron is an example of a very small number. Avogadro’s Number is 6.022 x 10^23 and the charge on an electron is 1.602 x 10^-19. Neither of these numbers could “fit” into the CPU without floating point representation; one is too large and the other too small.

To multiply the two, simply multiply the significand values and add the exponents: (6.022 x 10^23) x (1.602 x 10^-19) = 9.647 x 10^4
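
The same rule can be written out in a few lines of C as a toy decimal version (for illustration only; a real coprocessor works in binary and with far more care about precision and rounding):

    #include <stdio.h>

    /* A toy floating point value: a significand with one digit before the
       decimal point, times ten raised to the exponent. */
    typedef struct { double significand; int exponent; } ToyFloat;

    ToyFloat toy_mul(ToyFloat a, ToyFloat b)
    {
        ToyFloat r;

        r.significand = a.significand * b.significand;  /* multiply significands */
        r.exponent    = a.exponent + b.exponent;        /* add exponents */
        if (r.significand >= 10.0) {                    /* renormalize if needed */
            r.significand /= 10.0;
            r.exponent += 1;
        }
        return r;
    }

    int main(void)
    {
        ToyFloat avogadro = { 6.022, 23 };
        ToyFloat charge   = { 1.602, -19 };
        ToyFloat product  = toy_mul(avogadro, charge);

        printf("%.3f x 10^%d\n", product.significand, product.exponent);
        /* prints 9.647 x 10^4, matching the example above */
        return 0;
    }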

This is how the computer handles floating point arithmetic, except it uses binary instead of decimal. While this eases the processor’s workload, it still requires several memory fetches for the CPU; double precision requires even more. The CPU actually emulates floating point arithmetic by using multiple registers and multiple instructions to perform a single floating point arithmetic operation.

MANAGING THE FLOW

The math coprocessor is designed to eliminate potential problems (such as overflow and underflow), as well as the time-consuming emulation process necessary to complete floating point operations.

First, the internal registers on a coprocessor are very large, making overflow and underflow almost impossible. In fact, the internal registers of the math coprocessor can represent numbers as large as 10^4,932 or as small as 10^-4,932. To put this into perspective, the larger number is a “1” followed by nearly 5,000 zeros, and the smaller number is a decimal point followed by almost 5,000 zeros and a “1.” These large registers also practically eliminate any problems with rounding.

Second, the math coprocessor’s internal operating instructions are written specifically to work with floating point numbers. Because the microcode is optimized, the math coprocessor executes floating point arithmetic very quickly.

Finally, the math coprocessor is built with direct instructions for trigonometric and logarithmic functions. These calculations would normally require that a fairly long algorithm be calculated by the CPU. This is why programs with trig functions or logarithms show the most substantial improvement after the addition of a math coprocessor.

MATH-COPROCESSOR FUNCTIONS

The math coprocessor usually has six different types of instructions. Three of these are non-mathematical and are used for moving data, comparing data, and controlling the coprocessor. The other three instruction types are mathematical.

The first of these involves constant instructions. These allow a mathematical constant, such as 1.0 or Pi, to be quickly retrieved for calculations, making the process much faster, since the constants don’t have to be retrieved from memory.

Non-transcendental functions consist of common mathematical operations, including addition, subtraction, multiplication, division, square root, absolute value, rounding, and other numerical manipulations.

Finally, transcendental functions allow the math coprocessor to execute trigonometric and logarithmic operations. These include sine, cosine, tangent, and several base 2 logs and antilogs.

This rich set of mathematical operations allows the math coprocessor to execute many operations with a single instruction–operations that, if emulated with the CPU, would require many instructions.

The i486 integrated CPU/math coprocessor is even more advanced, as the two components are completely coupled on one silicon device and can achieve higher performance than the i387 non-integrated model. The actual floating point registers used within the math coprocessors are identical, thereby ensuring software compatibility.

Math coprocessor operation centers around six internal register types: Data, Tag Word, Status Word, Instruction and Data pointers, and Control Word.

Data registers are composed of eight 80-bit registers. Depending on how much precision is required by the software, a portion or all of these registers will be used. These registers can be thought of as a stack; the math coprocessor’s numeric instructions can address data either in registers relative to the “top” of the stack or in the “top” register itself. This provides more flexibility for programmers creating subroutines in their code.

Tag Word marks the content of each of the data registers and helps optimize the math coprocessor’s performance by identifying empty registers. Tag Word also simplifies exception handling by eliminating complex decoding operations typically required in a CPU exception routine.

The 16-bit Status Word is used to report the overall status of the math coprocessor. Through a series of codes, a host of exception conditions and busy codes can be reported by Status Word. For example, if the math coprocessor detects an underflow, overflow, precision error, or other invalid operation, it will indicate this in Status Word.

The Instruction and Data pointers are used to pass information about instructions or data in memory back to the CPU in case of an exception. Because the math coprocessor can operate in one of four modes (32-bit protected, 32-bit real, 16-bit protected, or 16-bit real), these registers will appear differently depending on the operating mode. Programmers can use the information in these registers to initiate their own error handlers or subroutines.

Control Word is used by the software to define numeric precision, rounding, and exception masking operations. The precision options are used primarily to provide compatibility with earlier generations of math coprocessors that have less than 64-bit precision.

In addition to the main registers discussed here, the math coprocessor also provides six debug and five test registers. These registers are intended for programmers’ use during application development.

MATH-COPROCESSOR OPERATION IN YOUR COMPUTER

For programmers, newer math coprocessors are viewed as part of the CPU. That is, programmers can write their code with math-coprocessor instructions included along with the CPU instructions. The code can easily test for the presence of a math coprocessor in the PC. Then, if the application created is running on a PC with a math coprocessor installed, the math-specific instruction will execute on the math coprocessor.

In assembly language, all math-coprocessor instructions start with an “F,” as in FADD, whereas the corresponding CPU instruction is ADD. Above right is a short example of some 386/387 assembly-language code that uses the math coprocessor to calculate the circumference of a circle.

In most programs, if the math coprocessor is absent, the CPU will automatically emulate the math function using a long series of CPU instructions. As expected, however, the math coprocessor will execute the specific math function much faster than the CPU.

For example, a floating point division calculation takes about 24 microseconds with an 8086 CPU and 8087 math coprocessor combination. Without the math coprocessor, the 8086 takes about 2,000 microseconds to complete the calculation.

If the programmer is certain that a math coprocessor is present, the code can be highly optimized to rely heavily on the functions performed best by the math coprocessor.

MATH INSTRUCTIONS

Instructions for the math coprocessor differ from those for the CPU. To alert the CPU that a math-coprocessor instruction is coming, it is preceded by an ESCape command. When the CPU reads this instruction, it knows the following instruction and data (if any) are for the math coprocessor. However, all communications between the CPU and math coprocessor are transparent to the application program. Master synchronization between the chips is handled by hardware. In most systems, the math coprocessor operates at the same clock rate as the CPU, although some math coprocessors have the capability to operate from a separate, asynchronous clock.

The CPU then passes the instruction to the math coprocessor, which signals the CPU when it is ready to accept the data. The data, or operand, is either held by the math coprocessor, or used in an arithmetic operation in conjunction with a number already in the math coprocessor. When the math coprocessor has all the data it needs, it executes the proper mathematical function by accessing the internal microcode defined by that particular instruction.

The instruction for the math coprocessor does not always require data to be fetched. For example, if your spreadsheet cell had the equation SQRT(C4*D2), the math coprocessor would first retrieve the data for cells C4 and D2. It would then multiply them and hold the result. Next, it would be given the SQRT (square root) instruction. The data for this instruction (the product of C4 and D2) is already held, so it’s unnecessary to fetch it from memory.

Therefore, not only does the specialized SQRT function itself save a lot of time, but because the data was already held in the math coprocessor, the calculation as a whole takes less time. The CPU, executing this same function, might require many more memory accesses and a great deal more time, since it would have to execute an algorithm to calculate the square root.

You might be wondering at this point what the CPU is doing while the math coprocessor is performing the calculation. With many applications, it is briefly waiting for the math coprocessor to finish. However, newer application programs take advantage of this time to execute CPU instructions concurrently.

That is, while the math coprocessor is performing its calculations, the CPU continues to execute the application program. If the CPU gets to an instruction that requires the results from the math coprocessor, it has to wait until the math coprocessor is finished.

In spite of the brief waiting, the CPU/math coprocessor combination will still execute the program faster than the CPU could by itself. Borland’s Quattro Pro is an example of a program that takes advantage of concurrent processing when a math coprocessor is present.

Installing a math coprocessor in your PC may be the most effective performance boost you can buy, without moving to a faster, more expensive machine. This is especially true for programs that are specifically written to take advantage of a math coprocessor. You can contact your applications-software developer to determine precisely how much you will benefit by adding a math coprocessor to your PC.