Feature engineering is the craft of turning raw data into smarter inputs for models: cleaning, transforming, and creating features so that patterns are easier to learn. Practitioners use it to boost accuracy, shrink error, and make models more robust by handling missing values, fixing scale (e.g., normalization), taming skew (log transforms), encoding categories, and building domain-specific signals (ratios, lags, interactions). Good features often let simpler machine learning models outperform complex ones. They also improve interpretability (clear, named signals) and make production systems steadier through reproducible pipelines.
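Two of the transformations mentioned above, normalization and log transforms, can be sketched in a few lines of plain Python. This is a minimal illustration, not part of the exercise; the function names are my own:

```python
import math

def min_max_scale(xs):
    """Rescale values to the [0, 1] range (a common normalization step)."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def log_transform(xs):
    """Compress a right-skewed distribution using log(1 + x)."""
    return [math.log1p(x) for x in xs]

# Example: sensor readings on very different scales
print(min_max_scale([0, 5, 10]))      # [0.0, 0.5, 1.0]
print(log_transform([0, 9, 99]))      # roughly [0.0, 2.30, 4.61]
```

After `min_max_scale`, every feature lives on the same scale, so no single input dominates distance- or gradient-based models; `log1p` keeps large outliers from stretching the distribution.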
One Scale to Rule Them All
A classic step is standardizing units so every value means the same thing. Today, you’ll turn a list of temperatures reported in mixed units into a single, consistent scale: Kelvin.
A climate lab collects temperature readings from different sensors. Some report in Celsius, some in Fahrenheit, and a few already use Kelvin. The dashboard can only show one scale, so your job is to convert every reading to Kelvin.
The first line of the input contains a single integer n, the number of readings.
Each of the next n lines contains a temperature value followed by a space and a unit (one of C, F, or K, uppercase). The value can be an integer or a decimal.
Print the converted temperatures in Kelvin, one per line, formatted with exactly two digits after the decimal point.
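One way to approach the spec above is a sketch like the following. It assumes the standard conversion formulas (K = C + 273.15, K = (F − 32) × 5/9 + 273.15) and uses Python's format spec for the two-decimal output:

```python
import sys

def to_kelvin(value: float, unit: str) -> float:
    """Convert a temperature reading in C, F, or K to Kelvin."""
    if unit == "C":
        return value + 273.15
    if unit == "F":
        return (value - 32.0) * 5.0 / 9.0 + 273.15
    return value  # already Kelvin

def main() -> None:
    tokens = sys.stdin.read().split()
    n = int(tokens[0])
    for i in range(n):
        value = float(tokens[1 + 2 * i])
        unit = tokens[2 + 2 * i]
        # f"{x:.2f}" prints exactly two digits after the decimal point
        print(f"{to_kelvin(value, unit):.2f}")

if __name__ == "__main__":
    main()
```

Reading all tokens at once sidesteps per-line parsing; the `:.2f` format handles both integer and decimal inputs uniformly.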